Mar 19 16:40:35 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 19 16:40:35 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 19 16:40:35 crc restorecon[4688]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc 
restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc 
restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 
16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 
crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 
16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc 
restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc 
restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:35 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:35 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 19 16:40:36 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 
crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc 
restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 19 16:40:36 crc restorecon[4688]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc 
restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 19 16:40:36 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 16:40:37 crc kubenswrapper[4792]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.468883 4792 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475689 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475726 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475739 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475751 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475761 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475773 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475782 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475790 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475798 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 16:40:37 crc 
kubenswrapper[4792]: W0319 16:40:37.475809 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475819 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475827 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475834 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475869 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475877 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475886 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475895 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475903 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475911 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475919 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475957 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475967 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475975 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475982 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475990 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.475999 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476007 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476014 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476022 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476030 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476038 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476046 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476054 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476061 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476069 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476077 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476084 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476092 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476100 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476107 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476115 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476124 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476131 4792 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476139 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476146 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476154 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476161 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476169 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476178 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476186 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476198 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476209 4792 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476218 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476226 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476234 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476242 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476250 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476261 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476273 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476281 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476289 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476297 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476307 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476317 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476325 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476333 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476342 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476350 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476358 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476366 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.476374 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477388 4792 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477415 4792 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477430 4792 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477443 4792 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477455 4792 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477465 4792 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477477 4792 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477498 4792 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477508 4792 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477517 4792 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477527 4792 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477537 4792 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477547 4792 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477556 4792 flags.go:64] FLAG: --cgroup-root=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477565 4792 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477574 4792 flags.go:64] FLAG: --client-ca-file=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477582 4792 flags.go:64] FLAG: --cloud-config=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477591 4792 flags.go:64] FLAG: --cloud-provider=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477600 4792 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477613 4792 flags.go:64] FLAG: --cluster-domain=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477621 4792 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477630 4792 flags.go:64] FLAG: --config-dir=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477639 4792 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477649 4792 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477660 4792 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477669 4792 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477678 4792 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477688 4792 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477698 4792 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477708 4792 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477717 4792 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477727 4792 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477735 4792 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477756 4792 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477765 4792 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477774 4792 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477783 4792 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477792 4792 flags.go:64] FLAG: --enable-server="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477801 4792 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477813 4792 flags.go:64] FLAG: --event-burst="100"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477822 4792 flags.go:64] FLAG: --event-qps="50"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477833 4792 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477869 4792 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477880 4792 flags.go:64] FLAG: --eviction-hard=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477892 4792 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477901 4792 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477911 4792 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477920 4792 flags.go:64] FLAG: --eviction-soft=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477930 4792 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477939 4792 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477948 4792 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477957 4792 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477966 4792 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477975 4792 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477985 4792 flags.go:64] FLAG: --feature-gates=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.477996 4792 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478005 4792 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478015 4792 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478024 4792 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478034 4792 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478043 4792 flags.go:64] FLAG: --help="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478052 4792 flags.go:64] FLAG: --hostname-override=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478061 4792 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478071 4792 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478080 4792 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478089 4792 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478098 4792 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478108 4792 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478117 4792 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478125 4792 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478135 4792 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478144 4792 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478154 4792 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478163 4792 flags.go:64] FLAG: --kube-reserved=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478172 4792 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478182 4792 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478192 4792 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478200 4792 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478209 4792 flags.go:64] FLAG: --lock-file=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478219 4792 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478228 4792 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478238 4792 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478251 4792 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478260 4792 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478269 4792 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478278 4792 flags.go:64] FLAG: --logging-format="text"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478288 4792 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478297 4792 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478306 4792 flags.go:64] FLAG: --manifest-url=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478315 4792 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478327 4792 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478337 4792 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478348 4792 flags.go:64] FLAG: --max-pods="110"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478358 4792 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478367 4792 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478376 4792 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478385 4792 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478395 4792 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478404 4792 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478414 4792 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478436 4792 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478446 4792 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478456 4792 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478466 4792 flags.go:64] FLAG: --pod-cidr=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478476 4792 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478490 4792 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478499 4792 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478509 4792 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478519 4792 flags.go:64] FLAG: --port="10250"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478528 4792 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478536 4792 flags.go:64] FLAG: --provider-id=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478545 4792 flags.go:64] FLAG: --qos-reserved=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478555 4792 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478564 4792 flags.go:64] FLAG: --register-node="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478573 4792 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478585 4792 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478600 4792 flags.go:64] FLAG: --registry-burst="10"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478610 4792 flags.go:64] FLAG: --registry-qps="5"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478624 4792 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478632 4792 flags.go:64] FLAG: --reserved-memory=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478644 4792 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478652 4792 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478662 4792 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478671 4792 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478680 4792 flags.go:64] FLAG: --runonce="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478688 4792 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478698 4792 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478707 4792 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478716 4792 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478725 4792 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478769 4792 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478778 4792 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478787 4792 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478797 4792 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478806 4792 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478815 4792 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478824 4792 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478833 4792 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478865 4792 flags.go:64] FLAG: --system-cgroups=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478875 4792 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478889 4792 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478898 4792 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478908 4792 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478922 4792 flags.go:64] FLAG: --tls-min-version=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478931 4792 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478940 4792 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478949 4792 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478958 4792 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478968 4792 flags.go:64] FLAG: --v="2"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478981 4792 flags.go:64] FLAG: --version="false"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.478993 4792 flags.go:64] FLAG: --vmodule=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.479005 4792 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.479015 4792 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479263 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479276 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479287 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479296 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479305 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479314 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479323 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479331 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479340 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479348 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479355 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479364 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479376 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479384 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479392 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479400 4792 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479408 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479416 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479424 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479432 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479439 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479447 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479455 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479462 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479470 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479477 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479485 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479493 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479501 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479509 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479517 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479524 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479533 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479540 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479549 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479557 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479564 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479573 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479580 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479590 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479599 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479607 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479614 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479623 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479633 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479641 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479649 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479656 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479665 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479675 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479684 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479693 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479702 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479710 4792 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479718 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479725 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479733 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479740 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479748 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479756 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479764 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479771 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479779 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479786 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479796 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479807 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479815 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479824 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479832 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479867 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.479878 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.479927 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.495047 4792 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.495092 4792 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495305 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495326 4792 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495340 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495352 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495363 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495375 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495387 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495398 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495409 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495420 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495431 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495442 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495459 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495476 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495489 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495502 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495514 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495525 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495538 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495553 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495568 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495582 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495594 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495606 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495617 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495632 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495644 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495655 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495665 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495676 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495688 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495699 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495710 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495722 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495757 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495767 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495780 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495796 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495872 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495887 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495899 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495911 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495922 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495934 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495944 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495959 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495969 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495981 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.495992 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496003 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496014 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496025 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496037 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496048 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496058 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496069 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496080 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496092 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496103 4792 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496115 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496126 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496139 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496151 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496163 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496175 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496191 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496205 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496217 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496229 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496240 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496251 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.496269 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496665 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496687 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496699 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496710 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496720 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496731 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496742 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496753 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496763 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496775 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496787 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496798 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496809 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496822 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496833 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496879 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496893 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496904 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496914 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496926 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496937 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496953 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496969 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496981 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.496993 4792 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497007 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497020 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497031 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497043 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497057 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497072 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497083 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497097 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497110 4792 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497122 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497134 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497145 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497160 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497173 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497185 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497196 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497208 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497219 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497230 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497242 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497253 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497264 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497276 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497287 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497298 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497309 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497320 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497332 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497344 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497354 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497366 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497377 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497388 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497401 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497413 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497424 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497437 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497448 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497460 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497471 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497482 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497493 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497505 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497520 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497536 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.497549 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.497567 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.498986 4792 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.508692 4792 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.513527 4792 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.513700 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.515590 4792 server.go:997] "Starting client certificate rotation"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.515639 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.516366 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.542111 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.545271 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.547123 4792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.566163 4792 log.go:25] "Validated CRI v1 runtime API"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.606287 4792 log.go:25] "Validated CRI v1 image API"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.610756 4792 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.617738 4792 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-19-16-35-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.617790 4792 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.641053 4792 manager.go:217] Machine: {Timestamp:2026-03-19 16:40:37.637115935 +0000 UTC m=+0.783173555 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:acd41293-7f2d-450a-aedf-420c50056810 BootID:5b1791f1-2726-4ce2-afce-76ff4bb66f00 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8b:87:46 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8b:87:46 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:48:a4:87 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:56:c1:2f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cf:d1:f2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2b:9f:e0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:b8:29:59:d4:21 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:2d:cc:30:c4:a7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.641516 4792 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.641777 4792 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.644668 4792 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.645040 4792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.645126 4792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.645454 4792 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.645475 4792 container_manager_linux.go:303] "Creating device plugin manager"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.646098 4792 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.646177 4792 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.646454 4792 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.646607 4792 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.649944 4792 kubelet.go:418] "Attempting to sync node with API server"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.649982 4792 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.650068 4792 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.650136 4792 kubelet.go:324] "Adding apiserver pod source"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.650160 4792 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.655458 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.655608 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.655565 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.655698 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.656775 4792 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.658278 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.661658 4792 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663384 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663471 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663545 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663598 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663656 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663706 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663759 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663814 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663906 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.663969 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.664050 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.664103 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.666344 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.666983 4792 server.go:1280] "Started kubelet"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.668128 4792 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.668115 4792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 16:40:37 crc systemd[1]: Started Kubernetes Kubelet.
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.670071 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.670492 4792 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.671745 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.671806 4792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.672124 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.672189 4792 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.672205 4792 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.672228 4792 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.672596 4792 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.673644 4792 factory.go:55] Registering systemd factory
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.673697 4792 factory.go:221] Registration of the systemd container factory successfully
Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.673744 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.673904 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.673632 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.677576 4792 factory.go:153] Registering CRI-O factory
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.677608 4792 factory.go:221] Registration of the crio container factory successfully
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.677679 4792 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.677708 4792 factory.go:103] Registering Raw factory
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.677732 4792 manager.go:1196] Started watching for new ooms in manager
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.678386 4792 manager.go:319] Starting recovery of all containers
Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.677382 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e4b9c3e8cc37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,LastTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.679376 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681263 4792 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681301 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681320 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681335 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681346 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681357 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681374 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681391 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681410 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681425 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681440 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681456 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681476 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681493 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681507 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681519 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681534 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681549 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681565 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681578 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681595 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681638 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681653 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681669 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681684 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681699 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681722 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681737 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681789 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681805 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681819 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681871 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681888 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681903 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681918 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681935 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681949 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681964 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681979 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.681994 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682008 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682024 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682038 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682053 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682067 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682083 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682096 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682110 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682123 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682135 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682149 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682166 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682186 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682203 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682220 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682237 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682251 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682270 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682284 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682298 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682311 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682324 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682734 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682756 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.682768 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.683959 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684062 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684126 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684163 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684202 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684325 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684360 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684409 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684449 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684485 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684535 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684564 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684601 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684632 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684657 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684688 4792 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684710 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684733 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684765 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684789 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684822 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684870 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684900 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684935 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684957 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.684986 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685009 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685036 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685070 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685106 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685139 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685352 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685374 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685401 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 19 16:40:37 crc 
kubenswrapper[4792]: I0319 16:40:37.685420 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685446 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685464 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685481 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685505 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685536 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685566 4792 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685596 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685671 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685694 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685753 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685775 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685810 4792 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685872 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685914 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685962 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.685991 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686017 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686039 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686058 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686087 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686110 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686137 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686157 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686177 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686204 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686223 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686243 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686269 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686291 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686317 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686339 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686358 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686383 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686403 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686429 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686448 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" 
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686469 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686494 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686512 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686533 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686551 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686569 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686594 4792 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686615 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686638 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686655 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686674 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686697 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686715 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686736 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686759 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686777 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.686800 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687056 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687080 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687109 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687129 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687684 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687713 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687738 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687756 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687774 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687796 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.687814 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688390 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688437 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688461 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688489 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688510 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688719 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688789 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688878 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688923 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688955 4792 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.688994 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689029 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689069 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689102 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689132 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689176 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689206 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689254 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689286 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689316 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689357 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689392 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689430 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689461 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689494 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689534 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689563 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689602 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689631 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689690 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689733 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.689762 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.690117 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.690954 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.690982 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691034 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691053 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691073 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691092 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691108 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691127 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691143 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691160 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691173 4792 reconstruct.go:97] "Volume reconstruction finished" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.691188 4792 reconciler.go:26] "Reconciler: start to sync state" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.710763 4792 manager.go:324] Recovery completed Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.727671 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.729267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.729466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.729599 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.733956 4792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.735763 4792 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.735795 4792 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.735821 4792 state_mem.go:36] "Initialized new in-memory state store" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.737237 4792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.737833 4792 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.738227 4792 kubelet.go:2335] "Starting kubelet main sync loop" Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.738687 4792 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 16:40:37 crc kubenswrapper[4792]: W0319 16:40:37.739565 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.739668 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Mar 19 16:40:37 crc 
kubenswrapper[4792]: I0319 16:40:37.750705 4792 policy_none.go:49] "None policy: Start" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.752020 4792 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.752049 4792 state_mem.go:35] "Initializing new in-memory state store" Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.773416 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.796004 4792 manager.go:334] "Starting Device Plugin manager" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.796068 4792 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.796087 4792 server.go:79] "Starting device plugin registration server" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.796824 4792 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.796935 4792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.797161 4792 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.797274 4792 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.797285 4792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.810149 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.840365 4792 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.840480 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841512 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841646 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841889 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.841946 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842754 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842899 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.842930 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843618 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843709 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.843737 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844563 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844590 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844608 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.844551 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845511 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845642 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.845962 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.846011 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.847565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.847601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.847612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.874524 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.893791 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.893890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.893927 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.893962 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.893998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894068 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894140 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894157 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894251 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894270 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.894361 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.897195 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.899162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.899202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.899214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.899243 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:40:37 crc kubenswrapper[4792]: E0319 16:40:37.899582 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.222:6443: connect: connection refused" node="crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996135 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996620 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.996891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997252 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997266 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997389 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997411 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997468 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997451 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997436 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:37 crc kubenswrapper[4792]: I0319 16:40:37.997874 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.099688 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.100792 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.100902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.100931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.100982 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: E0319 16:40:38.101727 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.174121 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.196199 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.207214 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.213368 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: W0319 16:40:38.229284 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-68d05807a8f9a539355dca3575f7459ba836d29e90e08c1c9faa2a6ed2c1c499 WatchSource:0}: Error finding container 68d05807a8f9a539355dca3575f7459ba836d29e90e08c1c9faa2a6ed2c1c499: Status 404 returned error can't find the container with id 68d05807a8f9a539355dca3575f7459ba836d29e90e08c1c9faa2a6ed2c1c499
Mar 19 16:40:38 crc kubenswrapper[4792]: W0319 16:40:38.236312 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-743025f1418b627eb45aa273b271211e562330adaaf00e2c23acf8748fc77f26 WatchSource:0}: Error finding container 743025f1418b627eb45aa273b271211e562330adaaf00e2c23acf8748fc77f26: Status 404 returned error can't find the container with id 743025f1418b627eb45aa273b271211e562330adaaf00e2c23acf8748fc77f26
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.239597 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: W0319 16:40:38.241761 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-01993b1b02e644c1819ff0a59b3fcb009ad644926acbc9cbba480060fbf13f77 WatchSource:0}: Error finding container 01993b1b02e644c1819ff0a59b3fcb009ad644926acbc9cbba480060fbf13f77: Status 404 returned error can't find the container with id 01993b1b02e644c1819ff0a59b3fcb009ad644926acbc9cbba480060fbf13f77
Mar 19 16:40:38 crc kubenswrapper[4792]: E0319 16:40:38.275822 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.502797 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.503987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.504042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.504062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.504099 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: E0319 16:40:38.504600 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Mar 19 16:40:38 crc kubenswrapper[4792]: W0319 16:40:38.601707 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:38 crc kubenswrapper[4792]: E0319 16:40:38.601875 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.671901 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.744690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"743025f1418b627eb45aa273b271211e562330adaaf00e2c23acf8748fc77f26"}
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.745881 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68d05807a8f9a539355dca3575f7459ba836d29e90e08c1c9faa2a6ed2c1c499"}
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.747355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"79aa9b752a2a9be614814aba8e3b1cc9bc99be15a9bec968a3fb39569b6e5ea3"}
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.749264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6fa455b19eba25e14ed58770ade275ef10a0af6d87d69525b22b4701e2b47903"}
Mar 19 16:40:38 crc kubenswrapper[4792]: I0319 16:40:38.753308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"01993b1b02e644c1819ff0a59b3fcb009ad644926acbc9cbba480060fbf13f77"}
Mar 19 16:40:39 crc kubenswrapper[4792]: W0319 16:40:39.011961 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.012083 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.076703 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s"
Mar 19 16:40:39 crc kubenswrapper[4792]: W0319 16:40:39.076717 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.076889 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:39 crc kubenswrapper[4792]: W0319 16:40:39.273828 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.274002 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.305489 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.308216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.308275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.308292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.308329 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.309076 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.671721 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.702913 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 19 16:40:39 crc kubenswrapper[4792]: E0319 16:40:39.704005 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.758878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.758930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.758940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.758949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.758996 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.759999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.760038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.760049 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.761292 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c" exitCode=0
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.761346 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.761378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.761946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.761991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.762001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.763862 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81" exitCode=0
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.763943 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.763965 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.764194 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.764925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.764956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.764968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.765616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.765647 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.765677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.766157 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d" exitCode=0
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.766214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.766334 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.767750 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.767817 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.767883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.768442 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733" exitCode=0
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.768487 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733"}
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.768514 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.769805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.769917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.769938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:39 crc kubenswrapper[4792]: I0319 16:40:39.882636 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.671832 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:40 crc kubenswrapper[4792]: E0319 16:40:40.677740 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.773718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f4ffb102ca0b970820f5e520f2d4fd3c26e58dbeaff44ce5485d0269440f0071"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.773813 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.775248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.775309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.775327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.778060 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.777879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.778133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.778151 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.781437 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.781472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.781485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.784208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.784278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.784301 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.784317 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.789103 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5" exitCode=0
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.789260 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.789269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5"}
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.789295 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.790971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:40 crc kubenswrapper[4792]: W0319 16:40:40.811348 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:40 crc kubenswrapper[4792]: E0319 16:40:40.811440 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.909557 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.911381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.911435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.911446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:40 crc kubenswrapper[4792]: I0319 16:40:40.911477 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 19 16:40:40 crc kubenswrapper[4792]: E0319 16:40:40.911982 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Mar 19 16:40:41 crc kubenswrapper[4792]: W0319 16:40:41.006372 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Mar 19 16:40:41 crc kubenswrapper[4792]: E0319 16:40:41.006470 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.799362 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.799346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1fcd9814d575a244e6944c528ab94ed94fc6facda17fd6daee146342ef51ab78"}
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.801746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.801807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.801856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.803956 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232" exitCode=0
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.804052 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.804120 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 16:40:41 crc 
kubenswrapper[4792]: I0319 16:40:41.804193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232"} Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.804254 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.804290 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.804320 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.805213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.805248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.805265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.806239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.806259 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.806296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.806296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:41 crc 
kubenswrapper[4792]: I0319 16:40:41.806320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.806333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.808414 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.808463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.808476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:41 crc kubenswrapper[4792]: I0319 16:40:41.872929 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.367070 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.810656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c729a4859bb54286f01a9a840f4227fcc3910f3e44c2967be4fec08e10f679f4"} Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.811108 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cee61d5a25a853c60c381c2487199133c54a0b638b86ff36de00eacea7223c60"} Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.810739 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 16:40:42 crc kubenswrapper[4792]: 
I0319 16:40:42.811139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4484b69c21312b3be2793b1f0146c97f3f220990f691bf5c72e7040301fc48ee"} Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.811157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06a971fc04afd85666fdb8896af31c8843b71c2874dce9926e30412bbf930abc"} Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.810759 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.811177 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.812456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.812493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.812503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.813428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.813457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.813466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.856694 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.883078 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:40:42 crc kubenswrapper[4792]: I0319 16:40:42.883196 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.797751 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.818616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4204b2817a908e1c39c947a4d0aaf4bc4b190069f888f806252005106307a1d6"} Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.818662 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.818772 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.820521 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.820553 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.820564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.820818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.820925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:43 crc kubenswrapper[4792]: I0319 16:40:43.821009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.113012 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.114605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.114662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.114677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.114709 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.441275 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.774929 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 
16:40:44.775156 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.781767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.781834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.781887 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.782417 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.821211 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.821331 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.821405 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.822957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.823634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.823666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.823688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:44 crc kubenswrapper[4792]: I0319 16:40:44.876213 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.824595 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.824884 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.825935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.825967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.825977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.826001 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.826041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:45 crc kubenswrapper[4792]: I0319 16:40:45.826054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.284306 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.284559 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.286135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.286187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.286203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.794209 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.794454 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.796048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.796125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 19 16:40:46 crc kubenswrapper[4792]: I0319 16:40:46.796147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:47 crc kubenswrapper[4792]: E0319 16:40:47.811038 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:40:50 crc kubenswrapper[4792]: I0319 16:40:50.330329 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 19 16:40:50 crc kubenswrapper[4792]: I0319 16:40:50.330518 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:50 crc kubenswrapper[4792]: I0319 16:40:50.332115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:50 crc kubenswrapper[4792]: I0319 16:40:50.332182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:50 crc kubenswrapper[4792]: I0319 16:40:50.332202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:51 crc kubenswrapper[4792]: W0319 16:40:51.499039 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.499137 4792 trace.go:236] Trace[31877347]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Mar-2026 16:40:41.496) (total time: 10002ms): Mar 19 16:40:51 crc kubenswrapper[4792]: Trace[31877347]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (16:40:51.499) Mar 19 
16:40:51 crc kubenswrapper[4792]: Trace[31877347]: [10.00229793s] [10.00229793s] END Mar 19 16:40:51 crc kubenswrapper[4792]: E0319 16:40:51.499168 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.671911 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.842426 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.845109 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1fcd9814d575a244e6944c528ab94ed94fc6facda17fd6daee146342ef51ab78" exitCode=255 Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.845154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1fcd9814d575a244e6944c528ab94ed94fc6facda17fd6daee146342ef51ab78"} Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.845269 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.846015 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.846071 4792 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.846091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.846918 4792 scope.go:117] "RemoveContainer" containerID="1fcd9814d575a244e6944c528ab94ed94fc6facda17fd6daee146342ef51ab78" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.882130 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.882312 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.883683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.883729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:51 crc kubenswrapper[4792]: I0319 16:40:51.883746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:52 crc kubenswrapper[4792]: W0319 16:40:52.114554 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.114644 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:52 crc kubenswrapper[4792]: W0319 16:40:52.115593 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.115639 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.116461 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e4b9c3e8cc37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,LastTimestamp:2026-03-19 
16:40:37.666947966 +0000 UTC m=+0.813005506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.127687 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" node="crc" Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.129013 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 19 16:40:52 crc kubenswrapper[4792]: W0319 16:40:52.132883 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.132949 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:52 crc kubenswrapper[4792]: E0319 16:40:52.136577 4792 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.141604 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.141653 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.154283 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]log ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]etcd ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 19 
16:40:52 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/generic-apiserver-start-informers ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/priority-and-fairness-filter ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-apiextensions-informers ok Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/crd-informer-synced failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-system-namespaces-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/bootstrap-controller failed: reason withheld Mar 19 16:40:52 crc 
kubenswrapper[4792]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/start-kube-aggregator-informers ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 19 16:40:52 crc kubenswrapper[4792]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]autoregister-completion ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/apiservice-openapi-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 19 16:40:52 crc kubenswrapper[4792]: livez check failed Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.154341 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.674266 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:52Z is after 2026-02-23T05:33:13Z Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.849780 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.851630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579"} Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.851829 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.852545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.852573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.852582 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.856943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.884262 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:40:52 crc kubenswrapper[4792]: I0319 16:40:52.884336 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.675136 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:53Z is after 2026-02-23T05:33:13Z Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.856180 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.856822 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.859389 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" exitCode=255 Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.859424 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579"} Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.859471 4792 scope.go:117] "RemoveContainer" containerID="1fcd9814d575a244e6944c528ab94ed94fc6facda17fd6daee146342ef51ab78" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.859589 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.860752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.860782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.860815 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:53 crc kubenswrapper[4792]: I0319 16:40:53.861285 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:40:53 crc kubenswrapper[4792]: E0319 16:40:53.861446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.442522 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.675496 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:54Z is after 2026-02-23T05:33:13Z Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.863986 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.866445 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.867752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.868041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.868235 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.869543 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:40:54 crc kubenswrapper[4792]: E0319 16:40:54.870089 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:40:54 crc kubenswrapper[4792]: I0319 16:40:54.873041 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.675488 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-19T16:40:55Z is after 2026-02-23T05:33:13Z Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.868770 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.874248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.874320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.874344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:55 crc kubenswrapper[4792]: I0319 16:40:55.875328 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:40:55 crc kubenswrapper[4792]: E0319 16:40:55.875631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:40:56 crc kubenswrapper[4792]: W0319 16:40:56.442797 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:56Z is after 2026-02-23T05:33:13Z Mar 19 16:40:56 crc kubenswrapper[4792]: E0319 16:40:56.442913 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.613990 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.676005 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:56Z is after 2026-02-23T05:33:13Z Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.871651 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.872795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.872823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.872831 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:56 crc kubenswrapper[4792]: I0319 16:40:56.873333 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:40:56 crc kubenswrapper[4792]: E0319 16:40:56.873471 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:40:57 crc kubenswrapper[4792]: W0319 16:40:57.017352 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:57Z is after 2026-02-23T05:33:13Z Mar 19 16:40:57 crc kubenswrapper[4792]: E0319 16:40:57.017432 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.675506 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:57Z is after 2026-02-23T05:33:13Z Mar 19 16:40:57 crc kubenswrapper[4792]: E0319 16:40:57.811749 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.873615 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.874730 4792 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.874783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.874801 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:57 crc kubenswrapper[4792]: I0319 16:40:57.875629 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:40:57 crc kubenswrapper[4792]: E0319 16:40:57.875945 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.527895 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.529426 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.529493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.529517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.529559 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:40:58 crc kubenswrapper[4792]: E0319 16:40:58.532062 4792 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:58Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 19 16:40:58 crc kubenswrapper[4792]: E0319 16:40:58.534244 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:58Z is after 2026-02-23T05:33:13Z" node="crc" Mar 19 16:40:58 crc kubenswrapper[4792]: I0319 16:40:58.675648 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:58Z is after 2026-02-23T05:33:13Z Mar 19 16:40:59 crc kubenswrapper[4792]: I0319 16:40:59.673765 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:40:59Z is after 2026-02-23T05:33:13Z Mar 19 16:41:00 crc kubenswrapper[4792]: W0319 16:41:00.164745 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:00Z is after 2026-02-23T05:33:13Z Mar 19 16:41:00 crc kubenswrapper[4792]: E0319 16:41:00.164829 4792 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.365027 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.365277 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.370500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.370577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.370606 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.380960 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.560865 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 16:41:00 crc kubenswrapper[4792]: E0319 16:41:00.565938 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-03-19T16:41:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.673634 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:00Z is after 2026-02-23T05:33:13Z Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.882100 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.883346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.883392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:00 crc kubenswrapper[4792]: I0319 16:41:00.883405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:01 crc kubenswrapper[4792]: I0319 16:41:01.674140 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:01Z is after 2026-02-23T05:33:13Z Mar 19 16:41:02 crc kubenswrapper[4792]: E0319 16:41:02.121372 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 2026-02-23T05:33:13Z" 
event="&Event{ObjectMeta:{crc.189e4b9c3e8cc37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,LastTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.676686 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 2026-02-23T05:33:13Z Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.883775 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.883921 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.884031 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.884242 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.885902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.885942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.885960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.886635 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 19 16:41:02 crc kubenswrapper[4792]: I0319 16:41:02.886945 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143" gracePeriod=30 Mar 19 16:41:02 crc kubenswrapper[4792]: W0319 16:41:02.983350 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 2026-02-23T05:33:13Z Mar 19 
16:41:02 crc kubenswrapper[4792]: E0319 16:41:02.983495 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.674134 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:03Z is after 2026-02-23T05:33:13Z Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.894078 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.894731 4792 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143" exitCode=255 Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.894801 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143"} Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.894879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837"} Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.895050 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.897204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.897313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:03 crc kubenswrapper[4792]: I0319 16:41:03.897343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.675963 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:04Z is after 2026-02-23T05:33:13Z Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.877250 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:41:04 crc kubenswrapper[4792]: W0319 16:41:04.893597 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:04Z is after 2026-02-23T05:33:13Z Mar 19 16:41:04 crc kubenswrapper[4792]: E0319 16:41:04.893703 4792 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.897244 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.898471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.898739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:04 crc kubenswrapper[4792]: I0319 16:41:04.898913 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.534427 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.538004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.538079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.538105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.538148 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:05 crc kubenswrapper[4792]: E0319 16:41:05.541343 4792 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:05Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 19 16:41:05 crc kubenswrapper[4792]: E0319 16:41:05.544066 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:05Z is after 2026-02-23T05:33:13Z" node="crc" Mar 19 16:41:05 crc kubenswrapper[4792]: I0319 16:41:05.676760 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:05Z is after 2026-02-23T05:33:13Z Mar 19 16:41:06 crc kubenswrapper[4792]: I0319 16:41:06.679032 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:06Z is after 2026-02-23T05:33:13Z Mar 19 16:41:07 crc kubenswrapper[4792]: I0319 16:41:07.679086 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:07Z is after 2026-02-23T05:33:13Z Mar 19 16:41:07 crc kubenswrapper[4792]: E0319 16:41:07.811895 4792 eviction_manager.go:285] "Eviction manager: failed to 
get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:08 crc kubenswrapper[4792]: I0319 16:41:08.674389 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:08Z is after 2026-02-23T05:33:13Z Mar 19 16:41:09 crc kubenswrapper[4792]: W0319 16:41:09.622219 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:09Z is after 2026-02-23T05:33:13Z Mar 19 16:41:09 crc kubenswrapper[4792]: E0319 16:41:09.622299 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.673732 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:09Z is after 2026-02-23T05:33:13Z Mar 19 16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.882612 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 
16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.882883 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.884354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.884470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:09 crc kubenswrapper[4792]: I0319 16:41:09.884490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:10 crc kubenswrapper[4792]: I0319 16:41:10.675941 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:10Z is after 2026-02-23T05:33:13Z Mar 19 16:41:11 crc kubenswrapper[4792]: I0319 16:41:11.678381 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:11Z is after 2026-02-23T05:33:13Z Mar 19 16:41:11 crc kubenswrapper[4792]: I0319 16:41:11.739219 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:11 crc kubenswrapper[4792]: I0319 16:41:11.740534 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:11 crc kubenswrapper[4792]: I0319 16:41:11.740642 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:11 crc 
kubenswrapper[4792]: I0319 16:41:11.740711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:11 crc kubenswrapper[4792]: I0319 16:41:11.741339 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:41:12 crc kubenswrapper[4792]: E0319 16:41:12.125891 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:12Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e4b9c3e8cc37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,LastTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.544599 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:12 crc kubenswrapper[4792]: E0319 16:41:12.545772 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:12Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.546056 4792 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.546092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.546101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.546122 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:12 crc kubenswrapper[4792]: E0319 16:41:12.551040 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:12Z is after 2026-02-23T05:33:13Z" node="crc" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.673697 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:12Z is after 2026-02-23T05:33:13Z Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.882801 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.882940 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.922316 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.924676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb"} Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.924925 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.926065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.926103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:12 crc kubenswrapper[4792]: I0319 16:41:12.926119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.675906 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:13Z is after 2026-02-23T05:33:13Z Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.929171 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.930100 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.932723 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" exitCode=255 Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.932778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb"} Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.932825 4792 scope.go:117] "RemoveContainer" containerID="b1dd5e2aaf4f804e84f9bd7353cb8934f6e7e9a55d6a3f76c961a82a43c61579" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.933125 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.934453 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.934510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.934529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:13 crc kubenswrapper[4792]: I0319 16:41:13.935513 4792 scope.go:117] "RemoveContainer" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" Mar 19 16:41:13 
crc kubenswrapper[4792]: E0319 16:41:13.935898 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:14 crc kubenswrapper[4792]: I0319 16:41:14.675381 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:14Z is after 2026-02-23T05:33:13Z Mar 19 16:41:14 crc kubenswrapper[4792]: I0319 16:41:14.938377 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 19 16:41:15 crc kubenswrapper[4792]: I0319 16:41:15.676462 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:15Z is after 2026-02-23T05:33:13Z Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.613166 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.613428 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.615067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.615109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.615124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.615727 4792 scope.go:117] "RemoveContainer" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" Mar 19 16:41:16 crc kubenswrapper[4792]: E0319 16:41:16.615981 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.675429 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:16Z is after 2026-02-23T05:33:13Z Mar 19 16:41:16 crc kubenswrapper[4792]: I0319 16:41:16.784823 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 16:41:16 crc kubenswrapper[4792]: E0319 16:41:16.792560 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-19T16:41:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:16 crc kubenswrapper[4792]: E0319 16:41:16.793909 4792 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 19 16:41:17 crc kubenswrapper[4792]: I0319 16:41:17.674502 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:17Z is after 2026-02-23T05:33:13Z Mar 19 16:41:17 crc kubenswrapper[4792]: W0319 16:41:17.686596 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:17Z is after 2026-02-23T05:33:13Z Mar 19 16:41:17 crc kubenswrapper[4792]: E0319 16:41:17.686714 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 19 16:41:17 crc kubenswrapper[4792]: E0319 16:41:17.812104 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:18 crc kubenswrapper[4792]: I0319 16:41:18.676906 4792 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:18Z is after 2026-02-23T05:33:13Z Mar 19 16:41:19 crc kubenswrapper[4792]: E0319 16:41:19.548927 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:19Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.551989 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.553244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.553291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.553307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.553336 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:19 crc kubenswrapper[4792]: E0319 16:41:19.556330 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:19Z is after 2026-02-23T05:33:13Z" node="crc" Mar 19 16:41:19 crc kubenswrapper[4792]: I0319 16:41:19.675069 4792 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:19Z is after 2026-02-23T05:33:13Z Mar 19 16:41:20 crc kubenswrapper[4792]: W0319 16:41:20.600860 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 16:41:20 crc kubenswrapper[4792]: E0319 16:41:20.601457 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 16:41:20 crc kubenswrapper[4792]: I0319 16:41:20.675586 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:21 crc kubenswrapper[4792]: I0319 16:41:21.676640 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.133678 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c3e8cc37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,LastTimestamp:2026-03-19 16:40:37.666947966 +0000 UTC m=+0.813005506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.140028 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.147114 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.155082 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.163052 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c467dd003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.800185859 +0000 UTC m=+0.946243399,LastTimestamp:2026-03-19 16:40:37.800185859 +0000 UTC 
m=+0.946243399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.170676 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.841486493 +0000 UTC m=+0.987544033,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.178656 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.841508705 +0000 UTC m=+0.987566245,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 
19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.182915 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.841518415 +0000 UTC m=+0.987575955,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.188903 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.842621895 +0000 UTC m=+0.988679435,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.194528 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.842638856 +0000 UTC m=+0.988696396,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.202206 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.842650437 +0000 UTC m=+0.988707967,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.211565 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.84344659 +0000 UTC m=+0.989504140,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.218640 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.843468662 +0000 UTC m=+0.989526202,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.224349 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.843480492 +0000 UTC m=+0.989538032,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.231356 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.843494863 +0000 UTC m=+0.989552403,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.235027 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.843507304 +0000 UTC m=+0.989564844,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.239645 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.843515984 +0000 UTC m=+0.989573524,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.243023 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC 
m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.84418551 +0000 UTC m=+0.990243050,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.249763 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.844199311 +0000 UTC m=+0.990256851,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.255050 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.844207811 +0000 UTC m=+0.990265351,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.262228 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.844407653 +0000 UTC m=+0.990465193,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.269073 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.844417984 +0000 UTC m=+0.990475524,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.277071 4792 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c424ad76f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c424ad76f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729736559 +0000 UTC m=+0.875794139,LastTimestamp:2026-03-19 16:40:37.844428204 +0000 UTC m=+0.990485744,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.281933 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42464f91\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42464f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729439633 +0000 UTC m=+0.875497223,LastTimestamp:2026-03-19 16:40:37.844555941 +0000 UTC m=+0.990613481,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.286439 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e4b9c42487c33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e4b9c42487c33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:37.729582131 +0000 UTC m=+0.875639781,LastTimestamp:2026-03-19 16:40:37.844568382 +0000 UTC m=+0.990625922,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.291533 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9c606ca9f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.23526962 +0000 UTC m=+1.381327160,LastTimestamp:2026-03-19 16:40:38.23526962 +0000 UTC m=+1.381327160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.295661 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9c6076cde2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.235934178 +0000 UTC m=+1.381991718,LastTimestamp:2026-03-19 16:40:38.235934178 +0000 UTC m=+1.381991718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.303126 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9c609f0216 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.238568982 +0000 UTC m=+1.384626532,LastTimestamp:2026-03-19 16:40:38.238568982 +0000 UTC m=+1.384626532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.307117 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c60fefb99 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.244858777 +0000 UTC m=+1.390916317,LastTimestamp:2026-03-19 16:40:38.244858777 +0000 UTC m=+1.390916317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.312908 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9c62bfbe22 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.274268706 +0000 UTC m=+1.420326256,LastTimestamp:2026-03-19 16:40:38.274268706 +0000 UTC m=+1.420326256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.319034 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9c85d7c3d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.863045588 +0000 UTC m=+2.009103128,LastTimestamp:2026-03-19 16:40:38.863045588 +0000 UTC m=+2.009103128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.322883 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c85e2784c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.863747148 +0000 UTC m=+2.009804678,LastTimestamp:2026-03-19 16:40:38.863747148 +0000 UTC m=+2.009804678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.329149 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9c85f6aca0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.865071264 +0000 UTC m=+2.011128814,LastTimestamp:2026-03-19 16:40:38.865071264 +0000 UTC m=+2.011128814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.335100 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9c865ab0c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.871625922 +0000 UTC m=+2.017683462,LastTimestamp:2026-03-19 16:40:38.871625922 +0000 UTC m=+2.017683462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.342380 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9c8667ce46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.872485446 +0000 UTC m=+2.018542986,LastTimestamp:2026-03-19 16:40:38.872485446 +0000 UTC m=+2.018542986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.347402 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9c86823b6a openshift-kube-scheduler 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.874217322 +0000 UTC m=+2.020274862,LastTimestamp:2026-03-19 16:40:38.874217322 +0000 UTC m=+2.020274862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.353617 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c869ece4d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.876089933 +0000 UTC m=+2.022147473,LastTimestamp:2026-03-19 16:40:38.876089933 +0000 UTC m=+2.022147473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.358506 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c86b0d149 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.877270345 +0000 UTC m=+2.023327885,LastTimestamp:2026-03-19 16:40:38.877270345 +0000 UTC m=+2.023327885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.363893 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9c86e564e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.880716 +0000 UTC m=+2.026773530,LastTimestamp:2026-03-19 16:40:38.880716 +0000 UTC m=+2.026773530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.370310 4792 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9c8779d62e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.890444334 +0000 UTC m=+2.036501874,LastTimestamp:2026-03-19 16:40:38.890444334 +0000 UTC m=+2.036501874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.375812 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9c8795ab06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.892268294 +0000 UTC m=+2.038325834,LastTimestamp:2026-03-19 16:40:38.892268294 +0000 UTC m=+2.038325834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 
16:41:22.381705 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c9a4432f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.205696242 +0000 UTC m=+2.351753812,LastTimestamp:2026-03-19 16:40:39.205696242 +0000 UTC m=+2.351753812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.385932 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c9af968cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.217572045 +0000 UTC m=+2.363629585,LastTimestamp:2026-03-19 16:40:39.217572045 +0000 UTC 
m=+2.363629585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.390720 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c9b12daa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.21923959 +0000 UTC m=+2.365297140,LastTimestamp:2026-03-19 16:40:39.21923959 +0000 UTC m=+2.365297140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.395624 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9ca95cfadc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.458978524 +0000 UTC m=+2.605036064,LastTimestamp:2026-03-19 16:40:39.458978524 +0000 UTC m=+2.605036064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.400583 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9caa552ccc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.475244236 +0000 UTC m=+2.621301776,LastTimestamp:2026-03-19 16:40:39.475244236 +0000 UTC m=+2.621301776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.404767 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9caa69be54 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.476592212 +0000 UTC m=+2.622649782,LastTimestamp:2026-03-19 16:40:39.476592212 +0000 UTC m=+2.622649782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.410008 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9cb58cf9f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.663450617 +0000 UTC m=+2.809508157,LastTimestamp:2026-03-19 16:40:39.663450617 +0000 UTC 
m=+2.809508157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.415620 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9cb6559f47 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.676600135 +0000 UTC m=+2.822657675,LastTimestamp:2026-03-19 16:40:39.676600135 +0000 UTC m=+2.822657675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.420440 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cbb89784d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.763884109 +0000 UTC m=+2.909941649,LastTimestamp:2026-03-19 16:40:39.763884109 +0000 UTC m=+2.909941649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.426675 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9cbbbbf472 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.76719269 +0000 UTC m=+2.913250230,LastTimestamp:2026-03-19 16:40:39.76719269 +0000 UTC m=+2.913250230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.433052 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9cbbfff85d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.771650141 +0000 UTC m=+2.917707681,LastTimestamp:2026-03-19 16:40:39.771650141 +0000 UTC m=+2.917707681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.438986 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cbc1ab3de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.773402078 +0000 UTC m=+2.919459618,LastTimestamp:2026-03-19 16:40:39.773402078 +0000 UTC m=+2.919459618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 
16:41:22.444492 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9cc8a70af7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.983926007 +0000 UTC m=+3.129983547,LastTimestamp:2026-03-19 16:40:39.983926007 +0000 UTC m=+3.129983547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.449140 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cc8a86db4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.98401682 +0000 UTC m=+3.130074360,LastTimestamp:2026-03-19 16:40:39.98401682 +0000 UTC m=+3.130074360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.453829 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cc9855e1b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.998496283 +0000 UTC m=+3.144553823,LastTimestamp:2026-03-19 16:40:39.998496283 +0000 UTC m=+3.144553823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.457774 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cc995b15d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.999566173 +0000 UTC m=+3.145623713,LastTimestamp:2026-03-19 16:40:39.999566173 +0000 UTC m=+3.145623713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.463154 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e4b9cc9ad5ef9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.001117945 +0000 UTC m=+3.147175485,LastTimestamp:2026-03-19 16:40:40.001117945 +0000 UTC m=+3.147175485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.469686 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9cc9c49a0c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.002640396 +0000 UTC m=+3.148697926,LastTimestamp:2026-03-19 16:40:40.002640396 +0000 UTC m=+3.148697926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.473744 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cca303566 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.009692518 +0000 UTC m=+3.155750058,LastTimestamp:2026-03-19 16:40:40.009692518 +0000 UTC m=+3.155750058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.477471 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9ccb21d86d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.025528429 +0000 UTC m=+3.171585969,LastTimestamp:2026-03-19 16:40:40.025528429 +0000 UTC m=+3.171585969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.483354 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9ccb436b9f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.027728799 +0000 UTC m=+3.173786339,LastTimestamp:2026-03-19 16:40:40.027728799 +0000 UTC m=+3.173786339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.487306 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9ccb55737d openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.028910461 +0000 UTC m=+3.174967991,LastTimestamp:2026-03-19 16:40:40.028910461 +0000 UTC m=+3.174967991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.492954 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cd5d67bda openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.205138906 +0000 UTC m=+3.351196446,LastTimestamp:2026-03-19 16:40:40.205138906 +0000 UTC m=+3.351196446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.497640 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cd6c3b411 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.220685329 +0000 UTC m=+3.366742869,LastTimestamp:2026-03-19 16:40:40.220685329 +0000 UTC m=+3.366742869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.504970 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cd6cbadf5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.221208053 +0000 UTC m=+3.367265593,LastTimestamp:2026-03-19 16:40:40.221208053 +0000 UTC m=+3.367265593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 
16:41:22.511869 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9cd6db3511 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.222225681 +0000 UTC m=+3.368283221,LastTimestamp:2026-03-19 16:40:40.222225681 +0000 UTC m=+3.368283221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.516698 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cd8b0f8f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.253012209 +0000 UTC m=+3.399069749,LastTimestamp:2026-03-19 
16:40:40.253012209 +0000 UTC m=+3.399069749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.523439 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cd8c32073 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.254201971 +0000 UTC m=+3.400259511,LastTimestamp:2026-03-19 16:40:40.254201971 +0000 UTC m=+3.400259511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.533480 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9ce2dbbae2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.42358653 +0000 UTC m=+3.569644080,LastTimestamp:2026-03-19 16:40:40.42358653 +0000 UTC m=+3.569644080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.540824 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9ce30560b3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.426315955 +0000 UTC m=+3.572373495,LastTimestamp:2026-03-19 16:40:40.426315955 +0000 UTC m=+3.572373495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.554013 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e4b9ce384a4ce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.434656462 +0000 UTC m=+3.580714002,LastTimestamp:2026-03-19 16:40:40.434656462 +0000 UTC m=+3.580714002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.560512 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9ce4a8ca38 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.453802552 +0000 UTC m=+3.599860092,LastTimestamp:2026-03-19 16:40:40.453802552 +0000 UTC m=+3.599860092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.568606 4792 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9ce4d19e39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.456478265 +0000 UTC m=+3.602535805,LastTimestamp:2026-03-19 16:40:40.456478265 +0000 UTC m=+3.602535805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.575247 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cef25f948 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.62977876 +0000 UTC m=+3.775836300,LastTimestamp:2026-03-19 16:40:40.62977876 +0000 UTC m=+3.775836300,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.580551 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cefef45bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.642971069 +0000 UTC m=+3.789028609,LastTimestamp:2026-03-19 16:40:40.642971069 +0000 UTC m=+3.789028609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.588280 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cf00f3ba4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.645065636 +0000 UTC m=+3.791123176,LastTimestamp:2026-03-19 16:40:40.645065636 +0000 UTC m=+3.791123176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.596596 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9cf8dc02cd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.792703693 +0000 UTC m=+3.938761243,LastTimestamp:2026-03-19 16:40:40.792703693 +0000 UTC m=+3.938761243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.605662 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cfaec78f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.827336952 +0000 UTC m=+3.973394492,LastTimestamp:2026-03-19 16:40:40.827336952 +0000 UTC m=+3.973394492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.613425 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cfbb0ce10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.840203792 +0000 UTC m=+3.986261332,LastTimestamp:2026-03-19 16:40:40.840203792 +0000 UTC m=+3.986261332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.619344 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d05f31857 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:41.012320343 +0000 UTC m=+4.158377883,LastTimestamp:2026-03-19 16:40:41.012320343 +0000 UTC m=+4.158377883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.626569 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d06c3d878 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:41.026001016 +0000 UTC m=+4.172058556,LastTimestamp:2026-03-19 16:40:41.026001016 +0000 UTC m=+4.172058556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.634825 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d354cb90f openshift-etcd 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:41.806723343 +0000 UTC m=+4.952780933,LastTimestamp:2026-03-19 16:40:41.806723343 +0000 UTC m=+4.952780933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.643482 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d423fea93 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.023987859 +0000 UTC m=+5.170045409,LastTimestamp:2026-03-19 16:40:42.023987859 +0000 UTC m=+5.170045409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.651095 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d42e260b8 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.034634936 +0000 UTC m=+5.180692476,LastTimestamp:2026-03-19 16:40:42.034634936 +0000 UTC m=+5.180692476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.655644 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d42fb295a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.036259162 +0000 UTC m=+5.182316742,LastTimestamp:2026-03-19 16:40:42.036259162 +0000 UTC m=+5.182316742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.662452 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189e4b9d4f49bcd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.242735315 +0000 UTC m=+5.388792895,LastTimestamp:2026-03-19 16:40:42.242735315 +0000 UTC m=+5.388792895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.669106 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d4fff8b7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.254650238 +0000 UTC m=+5.400707818,LastTimestamp:2026-03-19 16:40:42.254650238 +0000 UTC m=+5.400707818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.675461 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d50174d76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.256207222 +0000 UTC m=+5.402264772,LastTimestamp:2026-03-19 16:40:42.256207222 +0000 UTC m=+5.402264772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.675716 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.679403 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d5e1a4530 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.491282736 +0000 UTC m=+5.637340286,LastTimestamp:2026-03-19 16:40:42.491282736 +0000 UTC m=+5.637340286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 
crc kubenswrapper[4792]: E0319 16:41:22.683833 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d5f03d1f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.506588665 +0000 UTC m=+5.652646245,LastTimestamp:2026-03-19 16:40:42.506588665 +0000 UTC m=+5.652646245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.685428 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d5f14b48e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.507695246 +0000 UTC m=+5.653752826,LastTimestamp:2026-03-19 16:40:42.507695246 +0000 UTC m=+5.653752826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.692282 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d6b90ae8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.717146763 +0000 UTC m=+5.863204303,LastTimestamp:2026-03-19 16:40:42.717146763 +0000 UTC m=+5.863204303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.697551 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d6c18ea67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.726074983 +0000 UTC m=+5.872132523,LastTimestamp:2026-03-19 16:40:42.726074983 +0000 UTC m=+5.872132523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.704621 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d6c27bada openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.72704585 +0000 UTC m=+5.873103390,LastTimestamp:2026-03-19 16:40:42.72704585 +0000 UTC m=+5.873103390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.711306 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-controller-manager-crc.189e4b9d7575f572 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 19 16:41:22 crc kubenswrapper[4792]: body: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.883167602 +0000 UTC m=+6.029225172,LastTimestamp:2026-03-19 16:40:42.883167602 +0000 UTC m=+6.029225172,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.716158 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9d757705fe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.883237374 +0000 UTC m=+6.029294944,LastTimestamp:2026-03-19 16:40:42.883237374 +0000 UTC m=+6.029294944,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.723438 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189e4b9d77c091aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.92161169 +0000 UTC m=+6.067669240,LastTimestamp:2026-03-19 16:40:42.92161169 +0000 UTC m=+6.067669240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.730365 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e4b9d7856c1bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:42.931454395 +0000 UTC m=+6.077511975,LastTimestamp:2026-03-19 16:40:42.931454395 +0000 UTC m=+6.077511975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.742438 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e4b9cf00f3ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cf00f3ba4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.645065636 +0000 UTC m=+3.791123176,LastTimestamp:2026-03-19 16:40:51.848253171 +0000 UTC m=+14.994310711,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.751985 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e4b9cfaec78f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cfaec78f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.827336952 +0000 UTC m=+3.973394492,LastTimestamp:2026-03-19 16:40:52.120130176 +0000 UTC m=+15.266187726,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 
16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.758490 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e4b9cfbb0ce10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9cfbb0ce10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:40.840203792 +0000 UTC m=+3.986261332,LastTimestamp:2026-03-19 16:40:52.129940881 +0000 UTC m=+15.275998421,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.763990 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-apiserver-crc.189e4b9f9d4efae9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 19 16:41:22 crc kubenswrapper[4792]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" 
cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 19 16:41:22 crc kubenswrapper[4792]: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.141636329 +0000 UTC m=+15.287693869,LastTimestamp:2026-03-19 16:40:52.141636329 +0000 UTC m=+15.287693869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.770203 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9f9d4fef0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.141698831 +0000 UTC m=+15.287756371,LastTimestamp:2026-03-19 16:40:52.141698831 +0000 UTC m=+15.287756371,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.777170 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-apiserver-crc.189e4b9f9e1099f6 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 19 16:41:22 crc kubenswrapper[4792]: body: [+]ping ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]log ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]etcd ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/generic-apiserver-start-informers ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/priority-and-fairness-filter ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-apiextensions-informers ok Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/crd-informer-synced failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-system-namespaces-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 19 16:41:22 
crc kubenswrapper[4792]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/bootstrap-controller failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/start-kube-aggregator-informers ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 19 16:41:22 crc kubenswrapper[4792]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]autoregister-completion ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/apiservice-openapi-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 19 16:41:22 crc kubenswrapper[4792]: livez 
check failed Mar 19 16:41:22 crc kubenswrapper[4792]: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.154325494 +0000 UTC m=+15.300383034,LastTimestamp:2026-03-19 16:40:52.154325494 +0000 UTC m=+15.300383034,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.782999 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e4b9f9e11290d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.154362125 +0000 UTC m=+15.300419665,LastTimestamp:2026-03-19 16:40:52.154362125 +0000 UTC m=+15.300419665,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.788300 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9935808 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 19 16:41:22 crc kubenswrapper[4792]: body: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.88431412 +0000 UTC m=+16.030371660,LastTimestamp:2026-03-19 16:40:52.88431412 +0000 UTC m=+16.030371660,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.791261 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9940975 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.884359541 +0000 UTC m=+16.030417081,LastTimestamp:2026-03-19 16:40:52.884359541 +0000 UTC 
m=+16.030417081,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.795078 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9fc9935808\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9935808 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 19 16:41:22 crc kubenswrapper[4792]: body: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.88431412 +0000 UTC m=+16.030371660,LastTimestamp:2026-03-19 16:41:02.883877344 +0000 UTC m=+26.029934924,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.801457 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9fc9940975\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9940975 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.884359541 +0000 UTC m=+16.030417081,LastTimestamp:2026-03-19 16:41:02.883980996 +0000 UTC m=+26.030038596,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.807818 4792 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4ba21dc6ed8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:41:02.886915469 +0000 UTC m=+26.032973039,LastTimestamp:2026-03-19 16:41:02.886915469 +0000 UTC m=+26.032973039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc 
kubenswrapper[4792]: E0319 16:41:22.814423 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9c86b0d149\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c86b0d149 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:38.877270345 +0000 UTC m=+2.023327885,LastTimestamp:2026-03-19 16:41:03.011406556 +0000 UTC m=+26.157464146,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.820713 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9c9a4432f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c9a4432f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created 
container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.205696242 +0000 UTC m=+2.351753812,LastTimestamp:2026-03-19 16:41:03.230353418 +0000 UTC m=+26.376410968,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.830501 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9c9af968cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9c9af968cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:39.217572045 +0000 UTC m=+2.363629585,LastTimestamp:2026-03-19 16:41:03.239556426 +0000 UTC m=+26.385614006,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.838534 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9fc9935808\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9935808 openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 19 16:41:22 crc kubenswrapper[4792]: body: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.88431412 +0000 UTC m=+16.030371660,LastTimestamp:2026-03-19 16:41:12.88290209 +0000 UTC m=+36.028959660,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.842820 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9fc9940975\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9940975 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.884359541 +0000 UTC 
m=+16.030417081,LastTimestamp:2026-03-19 16:41:12.882977192 +0000 UTC m=+36.029034772,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.857255 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.857442 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.858701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.858805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.858830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.860406 4792 scope.go:117] "RemoveContainer" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.860748 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.883565 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:41:22 crc kubenswrapper[4792]: I0319 16:41:22.883695 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:41:22 crc kubenswrapper[4792]: E0319 16:41:22.889498 4792 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e4b9fc9935808\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 19 16:41:22 crc kubenswrapper[4792]: &Event{ObjectMeta:{kube-controller-manager-crc.189e4b9fc9935808 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 19 16:41:22 crc kubenswrapper[4792]: body: Mar 19 16:41:22 crc kubenswrapper[4792]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:40:52.88431412 +0000 UTC m=+16.030371660,LastTimestamp:2026-03-19 16:41:22.883663858 +0000 UTC m=+46.029721438,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 19 16:41:22 crc kubenswrapper[4792]: > Mar 19 16:41:23 crc kubenswrapper[4792]: I0319 16:41:23.681807 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:23 crc kubenswrapper[4792]: W0319 16:41:23.932357 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 19 16:41:23 crc kubenswrapper[4792]: E0319 16:41:23.932471 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 19 16:41:24 crc kubenswrapper[4792]: I0319 16:41:24.686624 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:25 crc kubenswrapper[4792]: I0319 16:41:25.680071 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:26 crc kubenswrapper[4792]: W0319 16:41:26.067135 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group 
"" at the cluster scope Mar 19 16:41:26 crc kubenswrapper[4792]: E0319 16:41:26.068090 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 16:41:26 crc kubenswrapper[4792]: E0319 16:41:26.556326 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.557306 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.558684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.558788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.558892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.559002 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:26 crc kubenswrapper[4792]: E0319 16:41:26.566289 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.676945 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.802268 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.802677 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.804355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.804426 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:26 crc kubenswrapper[4792]: I0319 16:41:26.804451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:27 crc kubenswrapper[4792]: I0319 16:41:27.680645 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:27 crc kubenswrapper[4792]: E0319 16:41:27.812315 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:28 crc kubenswrapper[4792]: I0319 16:41:28.676088 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.680336 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.886458 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.886652 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.889190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.889303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.889324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.892224 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.987739 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.989677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.989727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:29 crc kubenswrapper[4792]: I0319 16:41:29.989738 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:30 crc kubenswrapper[4792]: I0319 16:41:30.677923 4792 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:31 crc kubenswrapper[4792]: I0319 16:41:31.679725 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:32 crc kubenswrapper[4792]: I0319 16:41:32.674737 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:33 crc kubenswrapper[4792]: E0319 16:41:33.562570 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.566852 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.568388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.568439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.568452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.568488 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:33 crc kubenswrapper[4792]: E0319 16:41:33.573548 
4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 19 16:41:33 crc kubenswrapper[4792]: I0319 16:41:33.681375 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.675258 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.739449 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.740337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.740363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.740374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.740830 4792 scope.go:117] "RemoveContainer" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" Mar 19 16:41:34 crc kubenswrapper[4792]: I0319 16:41:34.999909 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.001768 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a"} Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.001953 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.003293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.003330 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.003346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:35 crc kubenswrapper[4792]: I0319 16:41:35.677610 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.005002 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.005527 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.007776 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" exitCode=255 Mar 19 16:41:36 crc 
kubenswrapper[4792]: I0319 16:41:36.007795 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a"} Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.008049 4792 scope.go:117] "RemoveContainer" containerID="8dc065b903e610b6a6081eab1a4ab461e6483c55f5d183ad85073a37fce5e3cb" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.008216 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.009237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.009270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.009278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.009712 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:41:36 crc kubenswrapper[4792]: E0319 16:41:36.009872 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.614063 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 
16:41:36 crc kubenswrapper[4792]: I0319 16:41:36.674592 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.012409 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.014911 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.015895 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.015934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.015945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.016460 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:41:37 crc kubenswrapper[4792]: E0319 16:41:37.016632 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:37 crc kubenswrapper[4792]: I0319 16:41:37.679214 4792 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:37 crc kubenswrapper[4792]: E0319 16:41:37.812430 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:38 crc kubenswrapper[4792]: I0319 16:41:38.675018 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:39 crc kubenswrapper[4792]: I0319 16:41:39.679113 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:40 crc kubenswrapper[4792]: E0319 16:41:40.570790 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 16:41:40 crc kubenswrapper[4792]: I0319 16:41:40.574005 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:40 crc kubenswrapper[4792]: I0319 16:41:40.575581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:40 crc kubenswrapper[4792]: I0319 16:41:40.575741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:40 crc kubenswrapper[4792]: I0319 16:41:40.575855 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:40 
crc kubenswrapper[4792]: I0319 16:41:40.575967 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:40 crc kubenswrapper[4792]: E0319 16:41:40.580167 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 19 16:41:40 crc kubenswrapper[4792]: I0319 16:41:40.675957 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:41 crc kubenswrapper[4792]: I0319 16:41:41.679204 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.674833 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.857091 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.857307 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.858277 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.858312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 
16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.858324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:42 crc kubenswrapper[4792]: I0319 16:41:42.858873 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:41:42 crc kubenswrapper[4792]: E0319 16:41:42.859059 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:43 crc kubenswrapper[4792]: I0319 16:41:43.675486 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:44 crc kubenswrapper[4792]: I0319 16:41:44.674944 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:45 crc kubenswrapper[4792]: I0319 16:41:45.675792 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:46 crc kubenswrapper[4792]: I0319 16:41:46.693794 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Mar 19 16:41:47 crc kubenswrapper[4792]: E0319 16:41:47.579582 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.580455 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.582367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.582623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.582794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.583092 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:47 crc kubenswrapper[4792]: E0319 16:41:47.587834 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 19 16:41:47 crc kubenswrapper[4792]: I0319 16:41:47.677526 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:47 crc kubenswrapper[4792]: E0319 16:41:47.812693 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:48 crc kubenswrapper[4792]: I0319 
16:41:48.675238 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:48 crc kubenswrapper[4792]: I0319 16:41:48.795714 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 16:41:48 crc kubenswrapper[4792]: I0319 16:41:48.811332 4792 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 19 16:41:49 crc kubenswrapper[4792]: I0319 16:41:49.676469 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:50 crc kubenswrapper[4792]: I0319 16:41:50.674404 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:51 crc kubenswrapper[4792]: I0319 16:41:51.674903 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 16:41:52 crc kubenswrapper[4792]: I0319 16:41:52.628531 4792 csr.go:261] certificate signing request csr-bkpt8 is approved, waiting to be issued Mar 19 16:41:52 crc kubenswrapper[4792]: I0319 16:41:52.637376 4792 csr.go:257] certificate signing request csr-bkpt8 is issued Mar 19 16:41:52 crc kubenswrapper[4792]: I0319 16:41:52.738667 4792 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 19 16:41:53 crc kubenswrapper[4792]: I0319 16:41:53.515893 
4792 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 19 16:41:53 crc kubenswrapper[4792]: I0319 16:41:53.639206 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-23 08:33:52.45407803 +0000 UTC Mar 19 16:41:53 crc kubenswrapper[4792]: I0319 16:41:53.639303 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 5967h51m58.814779147s for next certificate rotation Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.115546 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.588984 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.590466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.590495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.590506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.590592 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.602065 4792 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.602163 4792 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.602179 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting 
node \"crc\": node \"crc\" not found" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.605783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.605862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.605881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.605906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.605922 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:41:54Z","lastTransitionTime":"2026-03-19T16:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.633337 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.638097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.638133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.638148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.638165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.638178 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:41:54Z","lastTransitionTime":"2026-03-19T16:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.654708 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.659306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.659516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.659644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.659782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.659955 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:41:54Z","lastTransitionTime":"2026-03-19T16:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.676614 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.681517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.681577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.681596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.681620 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.681637 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:41:54Z","lastTransitionTime":"2026-03-19T16:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.695095 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.695747 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.696145 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.739035 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.740615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.740682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.740701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:41:54 crc kubenswrapper[4792]: I0319 16:41:54.741602 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.741811 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.801351 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:54 crc kubenswrapper[4792]: E0319 16:41:54.901990 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.003142 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.103604 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.203765 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.304829 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.405473 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.505885 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.606889 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: I0319 16:41:55.636788 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 16:41:55 crc 
kubenswrapper[4792]: E0319 16:41:55.707198 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.808240 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:55 crc kubenswrapper[4792]: E0319 16:41:55.909428 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.010340 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.110976 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.212053 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.313043 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.414040 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.515045 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.615309 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.715569 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.815802 4792 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 19 16:41:56 crc kubenswrapper[4792]: E0319 16:41:56.915935 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.017138 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.117293 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.217829 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.318400 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.418948 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.519982 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.620721 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.721253 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.813970 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.821337 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:57 crc kubenswrapper[4792]: E0319 16:41:57.921682 4792 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.022811 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.123118 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.223218 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.323393 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.424529 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.524780 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.625718 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.726488 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.827408 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:58 crc kubenswrapper[4792]: E0319 16:41:58.928042 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.029189 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc 
kubenswrapper[4792]: E0319 16:41:59.129535 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.229701 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.330044 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.430574 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.530880 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.630967 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.731329 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.831736 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:41:59 crc kubenswrapper[4792]: E0319 16:41:59.932811 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.033982 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.134159 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.234851 4792 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.335325 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.436339 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.537257 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.637819 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.738265 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.838457 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:00 crc kubenswrapper[4792]: E0319 16:42:00.939062 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.040038 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.141202 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.241924 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.342925 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.443578 4792 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.544701 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.645694 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.752220 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: I0319 16:42:01.752307 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:42:01 crc kubenswrapper[4792]: I0319 16:42:01.753780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:01 crc kubenswrapper[4792]: I0319 16:42:01.753883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:01 crc kubenswrapper[4792]: I0319 16:42:01.753909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.853035 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:01 crc kubenswrapper[4792]: E0319 16:42:01.953531 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.054439 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.155831 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 
16:42:02.256530 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.357436 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.458544 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.558815 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.659397 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.759975 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.860292 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:02 crc kubenswrapper[4792]: E0319 16:42:02.961474 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.062405 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.163440 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.263659 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.364586 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 
16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.465462 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.565883 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.667174 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.767427 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.867731 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:03 crc kubenswrapper[4792]: E0319 16:42:03.968755 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.069450 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.170015 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.270715 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.371773 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.472721 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.573139 4792 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.673855 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.733919 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.738187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.738232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.738244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.738296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.738309 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:04Z","lastTransitionTime":"2026-03-19T16:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.747526 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.752690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.752755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.752781 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.752819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.752913 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:04Z","lastTransitionTime":"2026-03-19T16:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.768677 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.774163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.774214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.774227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.774243 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.774254 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:04Z","lastTransitionTime":"2026-03-19T16:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.783516 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.787034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.787077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.787092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.787112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.787127 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:04Z","lastTransitionTime":"2026-03-19T16:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.796921 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.797134 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.797176 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: I0319 16:42:04.840481 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.897606 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:04 crc kubenswrapper[4792]: E0319 16:42:04.997736 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.098148 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.199173 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.300149 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.400535 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.500918 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.601563 4792 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.702243 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: I0319 16:42:05.739337 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:42:05 crc kubenswrapper[4792]: I0319 16:42:05.740559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:05 crc kubenswrapper[4792]: I0319 16:42:05.740602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:05 crc kubenswrapper[4792]: I0319 16:42:05.740620 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:05 crc kubenswrapper[4792]: I0319 16:42:05.741530 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.741812 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.802731 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:05 crc kubenswrapper[4792]: E0319 16:42:05.903944 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 
16:42:06.004792 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.106098 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.206429 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.307008 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.407946 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.508905 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.609375 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.710460 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.811101 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:06 crc kubenswrapper[4792]: E0319 16:42:06.911740 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.012986 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.114228 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 
16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.215227 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.315819 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.415946 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.516763 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.617375 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.718729 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: I0319 16:42:07.738927 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 16:42:07 crc kubenswrapper[4792]: I0319 16:42:07.740238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:07 crc kubenswrapper[4792]: I0319 16:42:07.740337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:07 crc kubenswrapper[4792]: I0319 16:42:07.740399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.819074 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.819210 4792 eviction_manager.go:285] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Mar 19 16:42:07 crc kubenswrapper[4792]: E0319 16:42:07.919800 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.020640 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.120813 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.221046 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.321618 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.423931 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.525176 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.625331 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.726388 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.827217 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:08 crc kubenswrapper[4792]: E0319 16:42:08.927601 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.028679 4792 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.129252 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.230225 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.331082 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.431451 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.532063 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.632412 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.732775 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.833908 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:09 crc kubenswrapper[4792]: E0319 16:42:09.934325 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.035315 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.136259 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc 
kubenswrapper[4792]: E0319 16:42:10.237037 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.338033 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.438155 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.538928 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.639476 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.740171 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.841289 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:10 crc kubenswrapper[4792]: E0319 16:42:10.942210 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.043268 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.045358 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.145413 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.145798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.145927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.146021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.146121 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.248714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.248744 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.248753 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.248765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.248775 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.351637 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.351711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.351729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.351755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.351773 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.454783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.454864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.454881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.454900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.454912 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.557177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.557230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.557242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.557257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.557268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.659213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.659253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.659265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.659280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.659292 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.702990 4792 apiserver.go:52] "Watching apiserver" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.709290 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.710125 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.710498 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.710554 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.710596 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.710646 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.711353 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.711353 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.711428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.711609 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.711660 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.712100 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.713281 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.713379 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.714187 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.715038 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.716443 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.716492 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.716504 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.716943 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.741391 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.753461 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.760984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.761012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.761024 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.761038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.761049 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.772708 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.773044 4792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783653 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783789 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783773 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783940 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.783992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784074 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784158 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784171 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784298 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784327 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784443 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784443 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784475 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784525 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784548 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784577 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784630 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784676 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784724 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784819 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.784958 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785119 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785172 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785220 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785270 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785321 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785367 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785510 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785607 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785654 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785764 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785816 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785909 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786011 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786060 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785045 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785082 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785217 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786140 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785419 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785458 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785398 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785617 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785652 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785641 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785829 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.785893 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786107 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786291 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786324 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786342 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786361 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786380 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786417 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786476 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786508 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786524 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786631 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786651 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786676 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786693 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786701 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786763 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786776 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786784 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786892 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786909 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786928 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.786938 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787068 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787128 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787190 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787192 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787215 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787255 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787486 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787524 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787540 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787555 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787571 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787561 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787587 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787626 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787644 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787702 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787726 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787751 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787769 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787785 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787648 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787800 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788615 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788672 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788725 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788780 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788829 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789477 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789548 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789610 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789664 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789877 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790241 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790299 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787796 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790354 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.787901 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788074 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790406 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790459 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790509 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 
16:42:11.791415 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791590 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791703 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791826 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791946 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792003 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792092 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792149 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792206 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792294 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792443 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792610 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792774 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.793783 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794554 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794643 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794691 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794714 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " 
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794732 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794750 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794782 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794798 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794892 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794912 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794931 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794952 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.794968 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 19 16:42:11 crc 
kubenswrapper[4792]: I0319 16:42:11.794986 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795003 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795070 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795088 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795103 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795152 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.795167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788133 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788531 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788586 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788655 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.788976 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789050 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789279 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789691 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.789938 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790000 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790024 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790078 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.799202 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790110 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790265 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790421 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.790966 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791208 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791357 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791642 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.791982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792164 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792308 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792302 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792790 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.792979 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.793000 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.793180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.796508 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.796567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.796591 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.796680 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797179 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797211 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.799822 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797743 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.798198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.798297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.797019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.798609 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.799922 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.799929 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:12.299904547 +0000 UTC m=+95.445962087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.799946 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.800213 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.800456 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.800306 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.800917 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.800954 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.801386 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.801754 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.801788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.801883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.802106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.802258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.802609 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.802618 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.802642 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.803248 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.803270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.803413 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.803559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.803713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.804260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.804318 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.804346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.804450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.804640 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805054 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805221 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805222 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805222 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805386 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.805974 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.806154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.806171 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.806523 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.806770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.806954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807063 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807125 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807174 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807194 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807294 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807279 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807371 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807402 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807490 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807541 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807573 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807669 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807732 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807757 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807782 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807821 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807913 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807978 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808043 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808074 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808225 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808253 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808309 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807931 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808309 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808370 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808401 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808534 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.807684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808044 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808619 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808924 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.809005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.809004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.809026 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.809146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.809665 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.808568 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810693 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810724 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810746 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810765 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810785 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810961 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810984 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811006 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.810646 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811369 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.811116 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811415 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811265 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811290 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811665 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811741 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.811574 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.812037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.812254 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.812307 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.812318 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.812660 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.813270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.813607 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.813669 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.813703 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.815460 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.815482 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.815528 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.815798 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.815929 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.816355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.816358 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.816418 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.816427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.817332 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.817060 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.817473 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.817993 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:12.317963973 +0000 UTC m=+95.464021543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.818768 4792 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.819160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.819417 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.819583 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.819731 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.820288 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.820483 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.823201 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.823448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.823721 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.823798 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.824114 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.816304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.825813 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.826106 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.826437 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.826536 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832720 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832944 4792 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832960 4792 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832971 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832983 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.832993 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833003 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833011 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833021 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833031 4792 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833043 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833054 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833065 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833075 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833084 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833094 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833102 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833110 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833119 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833127 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833136 4792 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833146 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833155 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833163 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" 
DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833173 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833181 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833190 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833199 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833208 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833218 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833227 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" 
Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833236 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833245 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833255 4792 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833263 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833272 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833280 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833289 4792 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833300 4792 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833311 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833322 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833332 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833343 4792 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833352 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833363 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833373 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") 
on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833383 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833394 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833406 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833417 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833429 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833440 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833450 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 
16:42:11.833461 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833472 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833483 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833494 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833504 4792 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833515 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833526 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833536 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833547 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833557 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833568 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833579 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833729 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833747 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833759 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 
16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833770 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833783 4792 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833795 4792 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833808 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833822 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.833833 4792 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834438 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc 
kubenswrapper[4792]: I0319 16:42:11.834451 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834462 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834502 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834512 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834523 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834533 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834545 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834556 4792 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834567 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834577 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834588 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834599 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834649 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834663 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834674 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834791 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834807 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834818 4792 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834867 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834881 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834891 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834902 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node 
\"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834913 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834947 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834984 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.834997 4792 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835009 4792 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835021 4792 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835032 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835045 4792 reconciler_common.go:293] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835057 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835068 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835080 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835090 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835102 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835113 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835123 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835134 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835146 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835157 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835168 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835224 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835239 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.835250 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 
16:42:11.835261 4792 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.836035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.836109 4792 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.836364 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.835089 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.836580 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:12.336559903 +0000 UTC m=+95.482617523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.836111 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.837990 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.838049 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.838154 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:12.338126917 +0000 UTC m=+95.484184567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.836506 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838216 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838238 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838257 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838275 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838295 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node 
\"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838312 4792 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838327 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838343 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838360 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838377 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838393 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838409 4792 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838429 4792 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838448 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838465 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838482 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838498 4792 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838513 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838526 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838540 4792 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838554 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838570 4792 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838589 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838605 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838621 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838636 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838656 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc 
kubenswrapper[4792]: I0319 16:42:11.838676 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838692 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838709 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838725 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838741 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838758 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838775 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838792 4792 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838808 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838824 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838865 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838884 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838901 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838919 4792 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838935 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838951 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838967 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.838984 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839002 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839018 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839033 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839049 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" 
DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839066 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839082 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839098 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839116 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839133 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839149 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.839166 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc 
kubenswrapper[4792]: I0319 16:42:11.839181 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.845711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.845949 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.847019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.851270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.853216 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.854802 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.854902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.854916 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.853605 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:11 crc kubenswrapper[4792]: E0319 16:42:11.854999 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:12.354980269 +0000 UTC m=+95.501037809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.854265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.855490 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.857021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861310 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861429 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861464 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861502 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.861713 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.862427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.865160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.865189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.865201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.865216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.865228 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.878507 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.882103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.883379 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940607 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940783 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" 
DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940806 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940825 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940866 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940886 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940963 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940983 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940999 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941012 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941024 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941037 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941048 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941058 4792 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941068 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941080 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.941093 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.940925 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.967100 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.967154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.967165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.967181 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:11 crc kubenswrapper[4792]: I0319 16:42:11.967190 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:11Z","lastTransitionTime":"2026-03-19T16:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.027463 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.034134 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.037987 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 19 16:42:12 crc kubenswrapper[4792]: W0319 16:42:12.046647 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8dfebf3d0cca86bcf241edc52e3ef69363a068519229b0fa54006ae8a0ec9427 WatchSource:0}: Error finding container 8dfebf3d0cca86bcf241edc52e3ef69363a068519229b0fa54006ae8a0ec9427: Status 404 returned error can't find the container with id 8dfebf3d0cca86bcf241edc52e3ef69363a068519229b0fa54006ae8a0ec9427 Mar 19 16:42:12 crc kubenswrapper[4792]: W0319 16:42:12.048551 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-deadfc2b2dfc855c4d2e9d74ea60ca97980a7622d937a78ed73186bae4bf565a WatchSource:0}: Error finding container deadfc2b2dfc855c4d2e9d74ea60ca97980a7622d937a78ed73186bae4bf565a: Status 404 returned error can't find the container with id deadfc2b2dfc855c4d2e9d74ea60ca97980a7622d937a78ed73186bae4bf565a Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.069374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.069419 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.069432 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.069449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.069460 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.171984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.172028 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.172047 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.172069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.172085 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.235820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8dfebf3d0cca86bcf241edc52e3ef69363a068519229b0fa54006ae8a0ec9427"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.236527 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"35c090db8b718aaaa20bfd0acd085798b2ac109b507d314253a1ec68dd41f556"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.240687 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.240715 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"deadfc2b2dfc855c4d2e9d74ea60ca97980a7622d937a78ed73186bae4bf565a"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.274008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.274070 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.274091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 
16:42:12.274116 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.274133 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.343459 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.343520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.343554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.343575 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343652 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:13.343622249 +0000 UTC m=+96.489679789 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343678 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343694 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343704 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343754 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:13.343741172 +0000 UTC m=+96.489798712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343755 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343755 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343899 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:13.343880616 +0000 UTC m=+96.489938156 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.343917 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:13.343910697 +0000 UTC m=+96.489968237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.376176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.376219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.376229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.376244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.376256 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.444285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.444509 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.444545 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.444556 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:12 crc kubenswrapper[4792]: E0319 16:42:12.444624 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-19 16:42:13.444607736 +0000 UTC m=+96.590665276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.479542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.479593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.479606 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.479629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.479642 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.581975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.582036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.582053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.582075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.582092 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.684027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.684065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.684077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.684093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.684103 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.787010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.787063 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.787081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.787105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.787122 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.889923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.889988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.889999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.890036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.890053 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.993111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.993159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.993172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.993190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:12 crc kubenswrapper[4792]: I0319 16:42:12.993204 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:12Z","lastTransitionTime":"2026-03-19T16:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.095934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.095997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.096015 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.096043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.096067 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.198801 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.198956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.198978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.199000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.199057 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.245030 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.247087 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.260823 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.275643 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.288383 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.301358 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.301387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.301396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.301408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.301416 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.303981 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.319897 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.332552 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.345876 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.352258 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.352354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.352436 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.352550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.352938 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:15.352917395 +0000 UTC m=+98.498974955 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353032 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353048 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353070 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353107 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:15.35309762 +0000 UTC m=+98.499155170 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353683 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.353748 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:15.353731757 +0000 UTC m=+98.499789297 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.354007 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.354193 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:15.35415601 +0000 UTC m=+98.500213590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.357638 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.369952 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.382692 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.396077 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.403812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.403882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.403903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.403922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.403935 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.408323 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.453718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.453924 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.453950 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:13 crc 
kubenswrapper[4792]: E0319 16:42:13.453965 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.454034 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:15.454014145 +0000 UTC m=+98.600071685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.506581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.506624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.506635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.506651 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.506664 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.608904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.608956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.608971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.608988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.609000 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.711448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.711497 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.711509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.711541 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.711554 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.739188 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.739357 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.739451 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.739491 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.739561 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:13 crc kubenswrapper[4792]: E0319 16:42:13.739627 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.746554 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.747142 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.748345 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.749069 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.750061 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.750730 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.751314 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.752689 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.753317 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.754411 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.754931 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.756087 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.756534 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.757053 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.758002 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.758619 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.759812 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.760185 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.760751 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.761685 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.762167 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.763081 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.763534 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.764488 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.764932 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.765536 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.766804 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.767261 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.768176 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.768605 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.769421 4792 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.769518 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.771137 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.771944 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.772325 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.773757 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.774419 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.775227 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.775825 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.776780 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.777238 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.778181 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.778770 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.779689 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.780142 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.781028 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.781572 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.782992 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.783592 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.784615 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.785307 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.786656 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.787340 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.787982 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.814248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.814303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.814315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.814333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.814346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.916765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.916798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.916809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.916824 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:13 crc kubenswrapper[4792]: I0319 16:42:13.916833 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:13Z","lastTransitionTime":"2026-03-19T16:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.018711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.018785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.018804 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.018831 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.018879 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.120719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.120762 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.120770 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.120786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.120794 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.223632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.223670 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.223683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.223695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.223703 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.325901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.325945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.325958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.325979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.325994 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.428359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.428399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.428408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.428422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.428433 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.530550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.530591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.530600 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.530613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.530622 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.632908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.632945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.632954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.632975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.632986 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.735428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.735472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.735482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.735499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.735509 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.838630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.838680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.838696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.838716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.838729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.940708 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.940743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.940753 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.940766 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:14 crc kubenswrapper[4792]: I0319 16:42:14.940775 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:14Z","lastTransitionTime":"2026-03-19T16:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.043165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.043217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.043227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.043241 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.043251 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.146092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.146132 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.146144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.146161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.146172 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.169445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.169495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.169506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.169522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.169533 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.193634 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.198748 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.198783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.198792 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.198807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.198817 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.224393 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.230764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.230793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.230800 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.230813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.230822 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.246213 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.250310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.250339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.250351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.250367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.250381 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.253304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65"} Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.263337 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.267029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.267083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.267098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.267120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.267135 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.274103 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.282392 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.282554 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.284594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.284626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.284639 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.284657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.284670 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.286005 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.299508 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.310451 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.320592 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.331015 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:15Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.372046 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.372185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372269 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:19.372226228 +0000 UTC m=+102.518283908 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372335 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.372416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.372495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372601 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:19.372557807 +0000 UTC m=+102.518615347 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372675 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372699 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372711 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372771 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:19.372759752 +0000 UTC m=+102.518817292 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.372917 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.373041 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:19.37301669 +0000 UTC m=+102.519074230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.387744 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.387801 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.387820 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.387861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.387875 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.473366 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.473558 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.473584 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.473605 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.473682 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:19.473659547 +0000 UTC m=+102.619717127 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.491875 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.491935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.491952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.491972 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.491985 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.594588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.594657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.594674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.594699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.594719 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.697766 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.697929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.697951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.697983 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.698004 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.739438 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.739515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.739439 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.739663 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.739784 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:15 crc kubenswrapper[4792]: E0319 16:42:15.740035 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.801317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.801409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.801435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.801469 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.801493 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.905302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.905362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.905384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.905413 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:15 crc kubenswrapper[4792]: I0319 16:42:15.905435 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:15Z","lastTransitionTime":"2026-03-19T16:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.007819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.007910 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.007924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.007946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.007960 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.110139 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.110188 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.110199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.110216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.110226 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.213956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.214009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.214019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.214034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.214047 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.317381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.317449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.317465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.317483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.317495 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.419722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.419751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.419760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.419772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.419780 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.522625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.522666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.522678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.522695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.522706 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.624599 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.624913 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.624923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.624935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.624945 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.727426 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.727484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.727503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.727527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.727543 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.783481 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.784922 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.830184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.830213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.830222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.830234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.830246 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.931864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.931896 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.931905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.931920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:16 crc kubenswrapper[4792]: I0319 16:42:16.931928 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:16Z","lastTransitionTime":"2026-03-19T16:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.034636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.034672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.034680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.034694 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.034704 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.136872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.136901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.136910 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.136923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.136933 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.240110 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.240160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.240171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.240191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.240204 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.262514 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.265015 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.265482 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.285807 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ba
a38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.302146 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.319858 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.332690 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.342543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.342574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.342583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.342597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.342606 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.348417 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.366761 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.379340 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.444632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.444676 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.444690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.444709 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.444723 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.546646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.546680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.546690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.546703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.546713 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.649255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.649312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.649329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.649355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.649376 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.739236 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.739393 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:17 crc kubenswrapper[4792]: E0319 16:42:17.739495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.739409 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:17 crc kubenswrapper[4792]: E0319 16:42:17.739587 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:17 crc kubenswrapper[4792]: E0319 16:42:17.748234 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.754222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.754286 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.754305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.754336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.754358 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.761832 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.773717 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.798951 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.812905 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.828945 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.850444 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.856499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.856540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.856552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.856568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.856580 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.862953 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.959682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.959728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.959740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.959760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:17 crc kubenswrapper[4792]: I0319 16:42:17.959771 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:17Z","lastTransitionTime":"2026-03-19T16:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.061663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.061697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.061706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.061720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.061729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.164830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.164914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.164931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.164955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.164970 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.272035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.272081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.272098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.272120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.272137 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.374195 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.374226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.374237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.374253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.374264 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.477067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.477127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.477147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.477173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.477191 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.580190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.580256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.580279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.580309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.580338 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.683458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.683529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.683554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.683583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.683604 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.786614 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.786665 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.786679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.786697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.786710 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.889478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.889524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.889538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.889556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.889568 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.992550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.992610 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.992626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.992645 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:18 crc kubenswrapper[4792]: I0319 16:42:18.992658 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:18Z","lastTransitionTime":"2026-03-19T16:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.094633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.094705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.094725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.094745 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.094760 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.197581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.197626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.197637 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.197653 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.197665 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.299972 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.300030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.300042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.300061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.300076 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.402262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.402307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.402319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.402336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.402348 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.409601 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.409677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.409705 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.409737 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409781 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:27.409741472 +0000 UTC m=+110.555799022 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409804 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409861 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409869 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:27.409855325 +0000 UTC m=+110.555912865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409924 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409954 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409973 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.409930 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:27.409911207 +0000 UTC m=+110.555968807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.410039 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:27.41002439 +0000 UTC m=+110.556081970 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.504563 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.504602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.504611 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.504625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.504635 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.510947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.511071 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.511095 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.511108 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.511156 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-19 16:42:27.511142011 +0000 UTC m=+110.657199551 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.526072 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-cvfx6"] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.526424 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.528533 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.528762 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.528828 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.539400 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.552047 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.564482 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.575330 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.584890 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.596146 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.606174 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.606208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.606217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.606230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.606239 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.608064 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.625819 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.708593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.708635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.708644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.708659 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.708669 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.716969 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/812ae5e5-a1ff-42ef-b120-95b6f3a18957-hosts-file\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.717004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhqp\" (UniqueName: \"kubernetes.io/projected/812ae5e5-a1ff-42ef-b120-95b6f3a18957-kube-api-access-mkhqp\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.738601 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.738645 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.738612 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.738719 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.738826 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:19 crc kubenswrapper[4792]: E0319 16:42:19.738954 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.811163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.811194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.811203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.811217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.811226 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.817564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/812ae5e5-a1ff-42ef-b120-95b6f3a18957-hosts-file\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.817596 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhqp\" (UniqueName: \"kubernetes.io/projected/812ae5e5-a1ff-42ef-b120-95b6f3a18957-kube-api-access-mkhqp\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.817694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/812ae5e5-a1ff-42ef-b120-95b6f3a18957-hosts-file\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.835303 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkhqp\" (UniqueName: \"kubernetes.io/projected/812ae5e5-a1ff-42ef-b120-95b6f3a18957-kube-api-access-mkhqp\") pod \"node-resolver-cvfx6\" (UID: \"812ae5e5-a1ff-42ef-b120-95b6f3a18957\") " pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.836979 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cvfx6" Mar 19 16:42:19 crc kubenswrapper[4792]: W0319 16:42:19.846497 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812ae5e5_a1ff_42ef_b120_95b6f3a18957.slice/crio-9b6fb688d4c69cb9b602c409d53a7765fb8dc3cace8b231d73e5de8326c9184c WatchSource:0}: Error finding container 9b6fb688d4c69cb9b602c409d53a7765fb8dc3cace8b231d73e5de8326c9184c: Status 404 returned error can't find the container with id 9b6fb688d4c69cb9b602c409d53a7765fb8dc3cace8b231d73e5de8326c9184c Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.904651 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-szhln"] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.905042 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mhtlt"] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.905804 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vbvt5"] Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.906205 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vbvt5" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.906587 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.907582 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910218 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910363 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910498 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910556 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910749 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910788 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910878 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910951 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.910765 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.911212 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.911807 4792 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912038 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912916 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.912968 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:19Z","lastTransitionTime":"2026-03-19T16:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.928218 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.942927 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.956338 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.969719 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.979311 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:19 crc kubenswrapper[4792]: I0319 16:42:19.993484 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:19Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.004406 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.015191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.015219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.015230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.015245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.015256 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-k8s-cni-cncf-io\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018789 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-bin\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018806 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a9e72e9a-50c3-41db-8657-7ae683c7c13a-rootfs\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-cnibin\") pod \"multus-vbvt5\" (UID: 
\"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018878 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-kubelet\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018900 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-system-cni-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018918 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-os-release\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018933 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-system-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-multus-certs\") pod \"multus-vbvt5\" (UID: 
\"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.018995 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-os-release\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-netns\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkg8q\" (UniqueName: \"kubernetes.io/projected/a9e72e9a-50c3-41db-8657-7ae683c7c13a-kube-api-access-hkg8q\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019054 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkt9\" (UniqueName: 
\"kubernetes.io/projected/c71152a8-67de-430c-a09b-1535ebc93a9a-kube-api-access-5lkt9\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019069 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-cnibin\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-socket-dir-parent\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019106 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e72e9a-50c3-41db-8657-7ae683c7c13a-proxy-tls\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019119 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-cni-binary-copy\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-multus\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019145 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019166 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-conf-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019194 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-etc-kubernetes\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e72e9a-50c3-41db-8657-7ae683c7c13a-mcd-auth-proxy-config\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6g7j\" (UniqueName: \"kubernetes.io/projected/159854fb-4797-4205-a888-ff4ae76d14e5-kube-api-access-z6g7j\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-hostroot\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.019247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-daemon-config\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.021421 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.033135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.044732 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.053322 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.066392 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.081271 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.091742 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.104466 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.114543 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.117386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.117422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.117435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 
16:42:20.117450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.117461 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120171 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-system-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-multus-certs\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-netns\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-os-release\") pod 
\"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkg8q\" (UniqueName: \"kubernetes.io/projected/a9e72e9a-50c3-41db-8657-7ae683c7c13a-kube-api-access-hkg8q\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-multus-certs\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-cnibin\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120363 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-netns\") 
pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120384 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-system-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120385 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-os-release\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkt9\" (UniqueName: \"kubernetes.io/projected/c71152a8-67de-430c-a09b-1535ebc93a9a-kube-api-access-5lkt9\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120669 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-cnibin\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-socket-dir-parent\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " 
pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-cni-binary-copy\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-multus\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e72e9a-50c3-41db-8657-7ae683c7c13a-proxy-tls\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: 
I0319 16:42:20.120913 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e72e9a-50c3-41db-8657-7ae683c7c13a-mcd-auth-proxy-config\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6g7j\" (UniqueName: \"kubernetes.io/projected/159854fb-4797-4205-a888-ff4ae76d14e5-kube-api-access-z6g7j\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-hostroot\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-conf-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-etc-kubernetes\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121018 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-daemon-config\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121045 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-k8s-cni-cncf-io\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121066 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-bin\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-cnibin\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-kubelet\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121120 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-binary-copy\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a9e72e9a-50c3-41db-8657-7ae683c7c13a-rootfs\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a9e72e9a-50c3-41db-8657-7ae683c7c13a-rootfs\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-multus\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.120797 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-socket-dir-parent\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121223 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-os-release\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-system-cni-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-system-cni-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121506 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-cni-binary-copy\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-os-release\") pod \"multus-vbvt5\" (UID: 
\"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121618 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-conf-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-cni-dir\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-hostroot\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121898 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-run-k8s-cni-cncf-io\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-etc-kubernetes\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.121972 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-kubelet\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-host-var-lib-cni-bin\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c71152a8-67de-430c-a09b-1535ebc93a9a-cnibin\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/159854fb-4797-4205-a888-ff4ae76d14e5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e72e9a-50c3-41db-8657-7ae683c7c13a-mcd-auth-proxy-config\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122363 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/c71152a8-67de-430c-a09b-1535ebc93a9a-multus-daemon-config\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.122485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/159854fb-4797-4205-a888-ff4ae76d14e5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.125196 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e72e9a-50c3-41db-8657-7ae683c7c13a-proxy-tls\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.131205 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.136115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6g7j\" (UniqueName: \"kubernetes.io/projected/159854fb-4797-4205-a888-ff4ae76d14e5-kube-api-access-z6g7j\") pod \"multus-additional-cni-plugins-mhtlt\" (UID: \"159854fb-4797-4205-a888-ff4ae76d14e5\") " pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.137372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkt9\" (UniqueName: \"kubernetes.io/projected/c71152a8-67de-430c-a09b-1535ebc93a9a-kube-api-access-5lkt9\") pod \"multus-vbvt5\" (UID: \"c71152a8-67de-430c-a09b-1535ebc93a9a\") " pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.141050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkg8q\" (UniqueName: \"kubernetes.io/projected/a9e72e9a-50c3-41db-8657-7ae683c7c13a-kube-api-access-hkg8q\") pod \"machine-config-daemon-szhln\" (UID: \"a9e72e9a-50c3-41db-8657-7ae683c7c13a\") " pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.143060 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.157094 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.170076 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.219602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.219645 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.219657 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.219671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.219681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.220930 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vbvt5" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.229924 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.247961 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" Mar 19 16:42:20 crc kubenswrapper[4792]: W0319 16:42:20.260974 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9e72e9a_50c3_41db_8657_7ae683c7c13a.slice/crio-e40a8ca5bcb3ebc562c2bf2312087682540694ea1d43cf385d0ffddb5d8917c9 WatchSource:0}: Error finding container e40a8ca5bcb3ebc562c2bf2312087682540694ea1d43cf385d0ffddb5d8917c9: Status 404 returned error can't find the container with id e40a8ca5bcb3ebc562c2bf2312087682540694ea1d43cf385d0ffddb5d8917c9 Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.275877 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cvfx6" event={"ID":"812ae5e5-a1ff-42ef-b120-95b6f3a18957","Type":"ContainerStarted","Data":"0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.275961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cvfx6" event={"ID":"812ae5e5-a1ff-42ef-b120-95b6f3a18957","Type":"ContainerStarted","Data":"9b6fb688d4c69cb9b602c409d53a7765fb8dc3cace8b231d73e5de8326c9184c"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.281189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"e40a8ca5bcb3ebc562c2bf2312087682540694ea1d43cf385d0ffddb5d8917c9"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.281993 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerStarted","Data":"9af0ab43cd0ac0a0f165a154f886276882f86f065abf815cb6408e41f320aa27"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.287726 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.301419 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerStarted","Data":"18ba0bcf95411f028ae44d0cdb228e44f0fd4544f2793df854b6c8a5c067f7f6"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.307657 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.312712 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5tgtj"] Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.313484 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319071 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319236 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319340 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319350 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319357 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.319491 4792 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.320857 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.324019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.324043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.324052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.324065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.324074 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.344346 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.357893 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.369809 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.385220 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.403545 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.415371 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425134 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425184 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425208 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w4l\" (UniqueName: \"kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425408 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425671 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425738 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425754 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425874 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425982 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.425996 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.426009 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.426026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.427157 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.428262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.428285 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.428293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.428306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.428314 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.439080 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.452127 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.469158 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.485468 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.497021 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.509900 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.522479 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526780 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9w4l\" (UniqueName: \"kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526832 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526914 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.526983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527066 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527095 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527153 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch\") pod 
\"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527169 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527186 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527214 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527235 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527449 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527514 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527549 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527674 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527775 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527783 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527818 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.527865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.528205 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.528232 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.531105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.531126 4792 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.531135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.531147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.531155 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.533431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.535290 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.541198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9w4l\" (UniqueName: \"kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l\") pod \"ovnkube-node-5tgtj\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.547963 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.568037 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.580110 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.592674 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.607615 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.621017 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:20Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.633073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.633103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.633112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.633125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.633151 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.653894 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:20 crc kubenswrapper[4792]: W0319 16:42:20.663773 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8705e1c9_d503_400f_93b0_b04ce7083d7a.slice/crio-fdc2e0868c660cdbd06225d555c33aac97ebe4918b407524617baa3314527acc WatchSource:0}: Error finding container fdc2e0868c660cdbd06225d555c33aac97ebe4918b407524617baa3314527acc: Status 404 returned error can't find the container with id fdc2e0868c660cdbd06225d555c33aac97ebe4918b407524617baa3314527acc Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.734344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.734401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.734418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.734445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.734492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.853822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.853876 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.853887 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.853900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.853910 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.956253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.956301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.956310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.956322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:20 crc kubenswrapper[4792]: I0319 16:42:20.956335 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:20Z","lastTransitionTime":"2026-03-19T16:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.059327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.059363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.059372 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.059386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.059396 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.162334 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.162392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.162407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.162428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.162441 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.265120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.265155 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.265165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.265183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.265195 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.305691 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6" exitCode=0 Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.305881 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.307531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerStarted","Data":"9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.309036 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.309140 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.310244 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a" exitCode=0 Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.310280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.310299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"fdc2e0868c660cdbd06225d555c33aac97ebe4918b407524617baa3314527acc"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.323000 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.335322 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.348090 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.361371 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.371016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.371055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.371065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.371079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.371088 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.375716 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.387136 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.398965 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.415728 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.426578 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.438987 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.451885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.464758 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.473181 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.473207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.473216 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.473228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.473236 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.475191 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.489129 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.506240 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.517203 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.526340 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.542726 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.552278 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.561261 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.572286 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.575121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.575155 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.575167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.575182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.575194 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.584098 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.594408 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.604932 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:21Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.677630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc 
kubenswrapper[4792]: I0319 16:42:21.677709 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.677723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.677739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.677751 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.738670 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.739026 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.738741 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:21 crc kubenswrapper[4792]: E0319 16:42:21.739142 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:21 crc kubenswrapper[4792]: E0319 16:42:21.739230 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:21 crc kubenswrapper[4792]: E0319 16:42:21.739290 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.780825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.780901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.780915 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.780941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.780955 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.886366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.886409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.886425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.886447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.886458 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.989538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.990183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.990194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.990209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:21 crc kubenswrapper[4792]: I0319 16:42:21.990219 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:21Z","lastTransitionTime":"2026-03-19T16:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.092625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.092657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.092667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.092679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.092688 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.195780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.195826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.195853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.195871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.195883 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.298301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.298353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.298366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.298384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.298398 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316172 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316212 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316231 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316240 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.316251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c"} Mar 19 16:42:22 crc kubenswrapper[4792]: 
I0319 16:42:22.318205 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c" exitCode=0 Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.318269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.334224 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.347055 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.358139 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.371643 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.385857 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.394437 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.399984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.400015 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.400024 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.400039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.400048 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.405822 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z 
is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.416180 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.426830 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.436394 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.446636 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.456478 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:22Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.502419 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.502470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.502481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.502495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.502506 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.609448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.610018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.610048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.610071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.610087 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.713130 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.713173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.713182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.713221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.713232 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.815690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.815732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.815743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.815761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.815775 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.918368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.918428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.918448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.918471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:22 crc kubenswrapper[4792]: I0319 16:42:22.918488 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:22Z","lastTransitionTime":"2026-03-19T16:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.021502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.021534 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.021542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.021556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.021567 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.124568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.124607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.124618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.124635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.124649 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.229150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.229207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.229219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.229238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.229249 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.323515 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970" exitCode=0 Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.323586 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.331395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.331425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.331456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.331474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.331486 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.338499 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.358915 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.375597 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.385920 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.396028 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.404116 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.414500 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.424789 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.433664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.433702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.433711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.433725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.433737 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.435908 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.453524 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 
16:42:23.469863 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.483125 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:23Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.535667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.535714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.535733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 
16:42:23.535749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.535759 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.637971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.638041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.638052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.638067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.638077 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.739004 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.739053 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.739137 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:23 crc kubenswrapper[4792]: E0319 16:42:23.739152 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:23 crc kubenswrapper[4792]: E0319 16:42:23.739232 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:23 crc kubenswrapper[4792]: E0319 16:42:23.739311 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.740821 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.740885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.740900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.740919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.740941 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.843755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.843794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.843803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.843817 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.843827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.946356 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.946404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.946416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.946433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:23 crc kubenswrapper[4792]: I0319 16:42:23.946445 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:23Z","lastTransitionTime":"2026-03-19T16:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.049129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.049162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.049172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.049184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.049194 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.151411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.151442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.151451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.151465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.151473 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.254261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.254305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.254319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.254337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.254352 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.332368 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5" exitCode=0 Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.332422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.347448 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.356713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.356739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.356749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.356763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.356771 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.368355 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.382432 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.394315 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.418810 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.434090 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.445164 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.459580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.459629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.459646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc 
kubenswrapper[4792]: I0319 16:42:24.459669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.459688 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.461115 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be
631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.480586 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.492219 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.507682 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.518665 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:24Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.561665 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.561718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.561740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.561769 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.561786 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.664965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.665023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.665047 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.665074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.665095 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.767524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.767586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.767609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.767639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.767659 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.869693 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.869754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.869773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.869796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.869815 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.971934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.972310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.972374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.972402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:24 crc kubenswrapper[4792]: I0319 16:42:24.972741 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:24Z","lastTransitionTime":"2026-03-19T16:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.076087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.076135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.076152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.076175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.076193 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.178586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.178634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.178647 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.178666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.178681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.281775 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.281821 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.281832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.281881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.281893 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.322562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.322619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.322636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.322658 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.322675 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.341865 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a"} Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.344890 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-market
place-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95
b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registr
y.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\
"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.347165 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd" exitCode=0 Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.347226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.350930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.350981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.351006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.351033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.351055 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.370497 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.382019 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.388039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.388108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.388134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.388169 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.388193 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.391722 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.407210 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.411162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.411191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.411203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.411219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.411232 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.431230 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.435988 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.446623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.446746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.446770 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.446796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.447099 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.454158 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.463753 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.463974 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.468572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.468635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.468660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.468694 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.468718 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.471162 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.489639 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.510763 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.523716 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.534884 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.548587 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.563862 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.573728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.573777 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.573787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.573801 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.573812 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.577833 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:25Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.675862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.675920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.675938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.675960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.675974 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.739244 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.739248 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.739431 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.739566 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.739542 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:25 crc kubenswrapper[4792]: E0319 16:42:25.739700 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.778048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.778095 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.778109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.778126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.778137 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.881380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.881599 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.881754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.881923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.882050 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.985536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.985569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.985579 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.985593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:25 crc kubenswrapper[4792]: I0319 16:42:25.985602 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:25Z","lastTransitionTime":"2026-03-19T16:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.087552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.087590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.087600 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.087613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.087623 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.190160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.190533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.190748 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.191105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.191162 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.294658 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.294724 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.294743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.294771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.294794 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.315414 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gfhg9"] Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.316117 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.320413 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.320422 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.320546 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.322177 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.347122 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.367008 4792 generic.go:334] "Generic (PLEG): container finished" podID="159854fb-4797-4205-a888-ff4ae76d14e5" containerID="761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104" exitCode=0 Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.367051 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerDied","Data":"761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.372812 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.384200 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-host\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.384313 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754jf\" (UniqueName: \"kubernetes.io/projected/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-kube-api-access-754jf\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.384632 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-serviceca\") pod 
\"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.393671 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\
\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.404736 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: 
I0319 16:42:26.404817 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.404833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.404877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.404890 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.426981 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.445007 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.467875 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.485500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754jf\" (UniqueName: \"kubernetes.io/projected/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-kube-api-access-754jf\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.485547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-serviceca\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.485573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-host\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.485630 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-host\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.486741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-serviceca\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.492698 4792 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.505267 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-754jf\" (UniqueName: \"kubernetes.io/projected/7efa5c7e-77e2-464b-9a81-cc95b1fe63d6-kube-api-access-754jf\") pod \"node-ca-gfhg9\" (UID: \"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\") " pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.513293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.513347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.513361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.513384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.513403 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.518815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.532327 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.548783 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.560105 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.573854 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.585087 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.599899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.610943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.617445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.617480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.617490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 
16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.617505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.617514 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.627455 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 
16:42:26.640161 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gfhg9" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.643773 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: W0319 16:42:26.653064 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7efa5c7e_77e2_464b_9a81_cc95b1fe63d6.slice/crio-f468257e4fc96a3bdfc48dda2cd109777a9d572d403655608e6afb07477932ea WatchSource:0}: Error finding container f468257e4fc96a3bdfc48dda2cd109777a9d572d403655608e6afb07477932ea: Status 404 returned error can't find the container with id f468257e4fc96a3bdfc48dda2cd109777a9d572d403655608e6afb07477932ea Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.659610 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.674160 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.687964 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.701386 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.718346 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.721282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.721318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.721326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.721341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.721350 4792 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.737119 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.747193 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.759660 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.771171 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.823525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.823564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.823573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.823587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.823598 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.925977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.926041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.926052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.926066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:26 crc kubenswrapper[4792]: I0319 16:42:26.926075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:26Z","lastTransitionTime":"2026-03-19T16:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.028587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.028625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.028634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.028650 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.028659 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.130723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.130773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.130787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.130804 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.130816 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.233257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.233314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.233326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.233343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.233356 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.336561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.336609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.336621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.336640 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.336653 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.380267 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.380502 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.380613 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.380644 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.385938 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" event={"ID":"159854fb-4797-4205-a888-ff4ae76d14e5","Type":"ContainerStarted","Data":"815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.388324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfhg9" event={"ID":"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6","Type":"ContainerStarted","Data":"87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.388378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfhg9" event={"ID":"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6","Type":"ContainerStarted","Data":"f468257e4fc96a3bdfc48dda2cd109777a9d572d403655608e6afb07477932ea"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.401236 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.414383 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.416094 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.417979 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.431196 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.439994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.440039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.440052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.440069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.440082 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.444009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.454639 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.468239 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.480247 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.495404 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.497782 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.497899 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.497947 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:42:43.497918489 +0000 UTC m=+126.643976039 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.497984 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498040 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:43.498022722 +0000 UTC m=+126.644080282 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.498130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.498174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498341 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498353 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498443 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-19 16:42:43.498422274 +0000 UTC m=+126.644479834 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498368 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498474 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.498517 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:43.498508956 +0000 UTC m=+126.644566506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.512292 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns
-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.525162 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.537777 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.542144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.542179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.542188 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc 
kubenswrapper[4792]: I0319 16:42:27.542201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.542211 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.552973 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.572289 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.593874 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.599623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.599814 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.599877 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.599900 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.600000 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:43.599979236 +0000 UTC m=+126.746036786 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.606918 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.617437 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.630363 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.644063 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.644981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.645040 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.645064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.645094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.645116 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.656784 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.670011 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.680012 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.688891 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.701746 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.713824 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.724253 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.733321 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.738653 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.738681 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.738745 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.738782 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.739030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:27 crc kubenswrapper[4792]: E0319 16:42:27.739146 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.747764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.747817 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.747827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.747864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.747881 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.751016 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.760954 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.777206 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.794442 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.807881 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.821445 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.836187 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.847466 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.850101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.850131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.850144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.850161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.850171 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.855881 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.867805 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.881662 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.893967 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.905567 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.952032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.952065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.952075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.952090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:27 crc kubenswrapper[4792]: I0319 16:42:27.952103 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:27Z","lastTransitionTime":"2026-03-19T16:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.054189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.054238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.054247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.054259 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.054268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.157962 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.158006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.158017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.158034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.158045 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.260726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.260775 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.260786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.260805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.260817 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.363335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.363405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.363428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.363460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.363482 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.466276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.466319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.466329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.466343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.466351 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.568347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.568383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.568392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.568405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.568415 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.670799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.670875 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.670891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.670911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.670925 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.774645 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.774721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.774737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.774757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.774768 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.877939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.878010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.878030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.878058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.878075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.981491 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.981564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.981580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.981603 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:28 crc kubenswrapper[4792]: I0319 16:42:28.981621 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:28Z","lastTransitionTime":"2026-03-19T16:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.084816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.084928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.084948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.084994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.085014 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.188018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.188059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.188074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.188091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.188107 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.291991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.292075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.292091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.292118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.292136 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.395321 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.395368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.395378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.395397 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.395409 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.498597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.498676 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.498691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.498715 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.498730 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.601049 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.601119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.601137 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.601158 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.601171 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.703414 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.703454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.703466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.703483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.703497 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.739213 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.739235 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:29 crc kubenswrapper[4792]: E0319 16:42:29.739446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.739511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:29 crc kubenswrapper[4792]: E0319 16:42:29.739640 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:29 crc kubenswrapper[4792]: E0319 16:42:29.739892 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.805808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.805894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.805911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.805934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.805950 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.908758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.908819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.908833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.908872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:29 crc kubenswrapper[4792]: I0319 16:42:29.908889 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:29Z","lastTransitionTime":"2026-03-19T16:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.045143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.045223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.045239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.045271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.045288 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.149111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.149181 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.149202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.149229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.149247 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.251766 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.251821 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.251834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.251873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.251891 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.355154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.355212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.355224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.355242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.355256 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.457336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.457404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.457418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.457435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.457446 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.560497 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.560543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.560555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.560568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.560576 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.664512 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.664573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.664596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.664625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.664647 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.767263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.767309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.767322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.767340 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.767353 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.870311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.870390 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.870417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.870448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.870475 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.973633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.973661 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.973670 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.973683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:30 crc kubenswrapper[4792]: I0319 16:42:30.973693 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:30Z","lastTransitionTime":"2026-03-19T16:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.075975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.076040 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.076060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.076085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.076102 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.179793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.179932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.179957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.179985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.180004 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.283376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.283448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.283469 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.283496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.283514 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.387286 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.387327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.387338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.387353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.387364 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.409187 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/0.log" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.412375 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b" exitCode=1 Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.412435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.413740 4792 scope.go:117] "RemoveContainer" containerID="7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.433658 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.451823 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.466665 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.480964 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.489468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.489507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.489516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.489530 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.489542 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.494107 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.510066 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.523972 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.535953 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.548464 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.563874 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.578531 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.592779 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.594966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.595000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.595010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.595027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.595039 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.610666 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 
7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3
a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:31Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.697049 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.697092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.697103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.697120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.697133 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.739637 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.739715 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.739729 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:31 crc kubenswrapper[4792]: E0319 16:42:31.739750 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:31 crc kubenswrapper[4792]: E0319 16:42:31.739827 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:31 crc kubenswrapper[4792]: E0319 16:42:31.740065 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.799343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.799399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.799417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.799444 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.799466 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.902296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.902362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.902379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.902401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:31 crc kubenswrapper[4792]: I0319 16:42:31.902417 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:31Z","lastTransitionTime":"2026-03-19T16:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.005223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.005287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.005306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.005331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.005348 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.107555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.107590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.107599 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.107612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.107622 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.210886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.210934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.210945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.210961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.210974 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.278638 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr"] Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.279397 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: W0319 16:42:32.282102 4792 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Mar 19 16:42:32 crc kubenswrapper[4792]: E0319 16:42:32.282215 4792 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.283043 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.307708 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.313190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 
16:42:32.313263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.313288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.313320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.313344 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.319687 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.330786 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.350434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.357199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.357252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.357277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25gz\" (UniqueName: \"kubernetes.io/projected/c52ab600-6188-4491-9186-622991c75340-kube-api-access-j25gz\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc 
kubenswrapper[4792]: I0319 16:42:32.357310 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.363792 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.375026 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.383824 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.394375 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.404174 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.415228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.415264 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.415274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc 
kubenswrapper[4792]: I0319 16:42:32.415290 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.415301 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.416552 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/0.log" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.419279 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.424954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.425879 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.437560 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16
:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3
a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.451067 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.457883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.457935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.458155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j25gz\" (UniqueName: \"kubernetes.io/projected/c52ab600-6188-4491-9186-622991c75340-kube-api-access-j25gz\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.458217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.458528 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.458726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c52ab600-6188-4491-9186-622991c75340-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.467115 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.478913 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j25gz\" (UniqueName: \"kubernetes.io/projected/c52ab600-6188-4491-9186-622991c75340-kube-api-access-j25gz\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.479187 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeae
e65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\
"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.488779 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.499119 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.512056 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.518027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.518058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.518069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.518081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.518090 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.523911 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.534428 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.547135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.565993 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.577834 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.587618 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.598564 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.610008 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.619813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.619864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.619879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.619893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.619903 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.623077 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.635714 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.648075 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.723030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.723072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.723082 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.723094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.723106 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.825246 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.825283 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.825295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.825310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.825318 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.863813 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.888152 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.907346 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.921793 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.932307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.932363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.932385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.932411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.932431 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:32Z","lastTransitionTime":"2026-03-19T16:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.935792 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.954528 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.967536 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:32 crc kubenswrapper[4792]: I0319 16:42:32.984695 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:32Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.016095 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.032977 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.035450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.035505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.035524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.035549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.035572 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.054011 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.056358 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n8pzj"] Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.057408 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.057569 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.073067 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.089801 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.105259 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.122516 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.139699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.139743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.139754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.139772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.139786 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.144542 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc 
kubenswrapper[4792]: I0319 16:42:33.158260 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.166314 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfs2\" (UniqueName: \"kubernetes.io/projected/ab985610-78ac-44cf-a2ee-9a4a52dc431f-kube-api-access-9vfs2\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.166388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.172492 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.188009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.227150 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.241726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.241774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.241787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.241803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.241815 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.257286 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.267495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfs2\" (UniqueName: \"kubernetes.io/projected/ab985610-78ac-44cf-a2ee-9a4a52dc431f-kube-api-access-9vfs2\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.267535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.267630 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.267677 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:42:33.767664669 +0000 UTC m=+116.913722209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.270498 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.287734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfs2\" (UniqueName: \"kubernetes.io/projected/ab985610-78ac-44cf-a2ee-9a4a52dc431f-kube-api-access-9vfs2\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.287749 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb914
4e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\
\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.297799 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.306907 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.317434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.329822 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.341423 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.343902 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.343964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.343988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.344018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.344040 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.353137 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.360708 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.429398 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/1.log" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.430044 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/0.log" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.432514 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82" exitCode=1 Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.432541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.432570 4792 scope.go:117] "RemoveContainer" containerID="7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.433910 4792 scope.go:117] "RemoveContainer" containerID="5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.434163 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.445141 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.445731 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.445926 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.446061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.446212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.446338 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.455431 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.458685 4792 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.458805 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert podName:c52ab600-6188-4491-9186-622991c75340 nodeName:}" failed. No retries permitted until 2026-03-19 16:42:33.95878927 +0000 UTC m=+117.104846810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-q4gqr" (UID: "c52ab600-6188-4491-9186-622991c75340") : failed to sync secret cache: timed out waiting for the condition Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.469747 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.480627 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.493736 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.507046 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.523162 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.545991 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.548927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.549124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.549257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.549393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.549600 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.574041 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be4225fd8a766d5571e694fcd5d668d5d9ba07c0f865b238230933b31205d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:30Z\\\",\\\"message\\\":\\\"nformers/externalversions/factory.go:141\\\\nI0319 16:42:30.118720 6633 obj_retry.go:551] Creating *factory.egressNode crc took: 12.327565ms\\\\nI0319 16:42:30.118765 6633 factory.go:1336] Added *v1.Node event handler 7\\\\nI0319 16:42:30.118853 6633 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0319 16:42:30.118923 6633 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119072 6633 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0319 16:42:30.119090 6633 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0319 16:42:30.119114 6633 factory.go:656] Stopping watch factory\\\\nI0319 16:42:30.119132 6633 handler.go:208] Removed *v1.Node event handler 2\\\\nI0319 16:42:30.119141 6633 handler.go:208] Removed *v1.Node event handler 7\\\\nI0319 16:42:30.118824 6633 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0319 16:42:30.119372 6633 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0319 16:42:30.119496 6633 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0319 16:42:30.119581 6633 ovnkube.go:599] Stopped ovnkube\\\\nI0319 16:42:30.119653 6633 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0319 16:42:30.119760 6633 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] 
[default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"n
ame\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.593359 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc 
kubenswrapper[4792]: I0319 16:42:33.614202 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.630645 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.652175 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.654500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc 
kubenswrapper[4792]: I0319 16:42:33.654549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.654572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.654602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.654627 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.679319 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.699547 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:33Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.738961 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.738970 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.739079 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.739180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.739301 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.739384 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.757574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.757613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.757622 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.757636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.757646 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.764920 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.773016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.773243 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:33 crc kubenswrapper[4792]: E0319 16:42:33.773328 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:42:34.773305315 +0000 UTC m=+117.919362895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.860591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.860656 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.860676 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.860704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.860722 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.964498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.964611 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.964643 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.964724 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.964750 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:33Z","lastTransitionTime":"2026-03-19T16:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.975282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:33 crc kubenswrapper[4792]: I0319 16:42:33.986676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c52ab600-6188-4491-9186-622991c75340-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q4gqr\" (UID: \"c52ab600-6188-4491-9186-622991c75340\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.067673 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.068005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.068135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.068279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.068514 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.094298 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.170761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.170791 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.170803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.170820 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.170831 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.274386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.274433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.274446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.274463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.274475 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.376864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.377140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.377151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.377166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.377177 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.436815 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/1.log" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.440710 4792 scope.go:117] "RemoveContainer" containerID="5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82" Mar 19 16:42:34 crc kubenswrapper[4792]: E0319 16:42:34.440940 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.441087 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" event={"ID":"c52ab600-6188-4491-9186-622991c75340","Type":"ContainerStarted","Data":"03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.441130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" event={"ID":"c52ab600-6188-4491-9186-622991c75340","Type":"ContainerStarted","Data":"8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.441141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" event={"ID":"c52ab600-6188-4491-9186-622991c75340","Type":"ContainerStarted","Data":"d4e5a8c41f9cb799ddaeed90d854930a2638a020491de59190265e908f904f5f"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.454199 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.467312 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.481893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.481945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.481958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.481980 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.481991 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.486088 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.498533 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.510337 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.521307 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.533626 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.547074 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.584189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.584227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.584239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.584255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.584268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.589912 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.612760 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.629781 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.644263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.667029 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.685490 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.686947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.686993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.687004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.687021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.687033 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.695584 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:34Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:34 crc 
kubenswrapper[4792]: I0319 16:42:34.738831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:34 crc kubenswrapper[4792]: E0319 16:42:34.738969 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.782301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:34 crc kubenswrapper[4792]: E0319 16:42:34.782459 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:34 crc kubenswrapper[4792]: E0319 16:42:34.782502 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:42:36.782490208 +0000 UTC m=+119.928547748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.789177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.789204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.789217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.789253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.789265 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.891635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.891666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.891675 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.891689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.891698 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.994438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.994476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.994485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.994498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:34 crc kubenswrapper[4792]: I0319 16:42:34.994507 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:34Z","lastTransitionTime":"2026-03-19T16:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.097078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.097113 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.097123 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.097139 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.097149 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.200003 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.200043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.200052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.200067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.200075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.303129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.303179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.303194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.303211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.303224 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.405527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.405561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.405571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.405582 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.405592 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.461277 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.475997 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.488338 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.500677 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.507305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.507329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.507338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.507350 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.507359 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.512815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.524213 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.543156 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.563215 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.574318 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.590037 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f27
6c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.602905 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.608964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.608996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.609005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.609018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.609026 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.616876 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z 
is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.631371 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.644973 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.661577 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.682215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.682262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.682279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.682294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.682305 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.698412 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.704034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.704262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.704274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.704296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.704309 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.720315 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.725325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.725361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.725373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.725389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.725401 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.739342 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.739342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.739982 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.740093 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.740233 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.740471 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.740536 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.744025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.744055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.744067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.744084 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.744099 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.756104 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.760267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.760312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.760325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.760344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.760356 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.772892 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:35Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:35 crc kubenswrapper[4792]: E0319 16:42:35.773046 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.774274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.774309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.774322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.774341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.774362 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.877005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.877073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.877090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.877141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.877155 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.980401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.980788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.980966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.981141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:35 crc kubenswrapper[4792]: I0319 16:42:35.981310 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:35Z","lastTransitionTime":"2026-03-19T16:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.083919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.084767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.084968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.085159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.085308 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.189614 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.189696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.189721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.189751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.189773 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.293566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.293628 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.293646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.293671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.293688 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.397371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.397448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.397474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.397503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.397520 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.500520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.500597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.500621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.500652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.500674 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.603175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.603267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.603282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.603303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.603319 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.705539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.705565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.705574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.705588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.705596 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.739067 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:36 crc kubenswrapper[4792]: E0319 16:42:36.739176 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.804702 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:36 crc kubenswrapper[4792]: E0319 16:42:36.804809 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:36 crc kubenswrapper[4792]: E0319 16:42:36.804868 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:42:40.804854976 +0000 UTC m=+123.950912516 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.807798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.807827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.807851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.807869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.807892 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.910173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.910239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.910258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.910298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:36 crc kubenswrapper[4792]: I0319 16:42:36.910318 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:36Z","lastTransitionTime":"2026-03-19T16:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.013097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.013139 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.013150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.013166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.013176 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.116020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.116105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.116131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.116166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.116193 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.219441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.219518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.219542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.219571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.219588 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.322872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.322933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.322950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.322972 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.322990 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.425990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.426081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.426105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.426138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.426161 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.529190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.529227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.529236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.529250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.529260 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.631489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.631550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.631568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.631592 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.631612 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:37Z","lastTransitionTime":"2026-03-19T16:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:37 crc kubenswrapper[4792]: E0319 16:42:37.732321 4792 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.739476 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:37 crc kubenswrapper[4792]: E0319 16:42:37.741035 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.741464 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:37 crc kubenswrapper[4792]: E0319 16:42:37.741615 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.746635 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:37 crc kubenswrapper[4792]: E0319 16:42:37.746821 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.761304 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.786646 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.807732 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.824213 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: E0319 16:42:37.826018 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.840358 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.857885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d7
84612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.876970 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.896328 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.915882 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.937570 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.958452 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.973629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:37 crc kubenswrapper[4792]: I0319 16:42:37.997284 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:38 crc kubenswrapper[4792]: I0319 16:42:38.029082 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:38Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:38 crc kubenswrapper[4792]: I0319 16:42:38.048282 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:38Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:38 crc kubenswrapper[4792]: I0319 16:42:38.739077 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:38 crc kubenswrapper[4792]: E0319 16:42:38.739259 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:39 crc kubenswrapper[4792]: I0319 16:42:39.738765 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:39 crc kubenswrapper[4792]: I0319 16:42:39.738802 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:39 crc kubenswrapper[4792]: I0319 16:42:39.738816 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:39 crc kubenswrapper[4792]: E0319 16:42:39.738961 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:39 crc kubenswrapper[4792]: E0319 16:42:39.739048 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:39 crc kubenswrapper[4792]: E0319 16:42:39.739180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:40 crc kubenswrapper[4792]: I0319 16:42:40.739205 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:40 crc kubenswrapper[4792]: E0319 16:42:40.739754 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:40 crc kubenswrapper[4792]: I0319 16:42:40.851389 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:40 crc kubenswrapper[4792]: E0319 16:42:40.851967 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:40 crc kubenswrapper[4792]: E0319 16:42:40.852086 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:42:48.852060652 +0000 UTC m=+131.998118232 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:41 crc kubenswrapper[4792]: I0319 16:42:41.739006 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:41 crc kubenswrapper[4792]: I0319 16:42:41.739039 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:41 crc kubenswrapper[4792]: I0319 16:42:41.739165 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:41 crc kubenswrapper[4792]: E0319 16:42:41.739326 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:41 crc kubenswrapper[4792]: E0319 16:42:41.739518 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:41 crc kubenswrapper[4792]: E0319 16:42:41.739571 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:42 crc kubenswrapper[4792]: I0319 16:42:42.739564 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:42 crc kubenswrapper[4792]: E0319 16:42:42.739799 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:42 crc kubenswrapper[4792]: E0319 16:42:42.827525 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.583293 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.583450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.583478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.583500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583592 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583647 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:43:15.583634765 +0000 UTC m=+158.729692305 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583826 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583914 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:43:15.583883062 +0000 UTC m=+158.729940612 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583930 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.583965 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.584015 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:43:15.584003195 +0000 UTC m=+158.730060745 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.584454 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.584517 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:43:15.5845048 +0000 UTC m=+158.730562350 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.684421 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.684624 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.684915 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.684932 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.684989 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-19 16:43:15.684973162 +0000 UTC m=+158.831030712 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.739079 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.739232 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.739080 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.739511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:43 crc kubenswrapper[4792]: I0319 16:42:43.739712 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:43 crc kubenswrapper[4792]: E0319 16:42:43.740578 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:44 crc kubenswrapper[4792]: I0319 16:42:44.739636 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:44 crc kubenswrapper[4792]: E0319 16:42:44.739890 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:45 crc kubenswrapper[4792]: I0319 16:42:45.739581 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:45 crc kubenswrapper[4792]: I0319 16:42:45.739656 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:45 crc kubenswrapper[4792]: I0319 16:42:45.739656 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:45 crc kubenswrapper[4792]: E0319 16:42:45.739779 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:45 crc kubenswrapper[4792]: E0319 16:42:45.740587 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:45 crc kubenswrapper[4792]: E0319 16:42:45.740734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:45 crc kubenswrapper[4792]: I0319 16:42:45.741187 4792 scope.go:117] "RemoveContainer" containerID="5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.032364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.032602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.032611 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.032624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.032636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:46Z","lastTransitionTime":"2026-03-19T16:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.051961 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.056539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.056582 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.056596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.056617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.056632 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:46Z","lastTransitionTime":"2026-03-19T16:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.075991 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.081589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.081627 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.081687 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.081709 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.081723 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:46Z","lastTransitionTime":"2026-03-19T16:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.100298 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.104819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.104860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.104871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.104885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.104896 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:46Z","lastTransitionTime":"2026-03-19T16:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.120202 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.124376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.124405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.124417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.124431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.124443 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:46Z","lastTransitionTime":"2026-03-19T16:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.143540 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.143763 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.489050 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/1.log" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.492099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c"} Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.492748 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.508354 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.523910 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.541385 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.554267 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.566668 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc 
kubenswrapper[4792]: I0319 16:42:46.583551 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.596162 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.619457 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.644277 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube
-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubec
fg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.657891 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.669458 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.681442 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.692669 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.704335 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.714418 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:46Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:46 crc kubenswrapper[4792]: I0319 16:42:46.738875 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:46 crc kubenswrapper[4792]: E0319 16:42:46.738979 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.498542 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/2.log" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.500077 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/1.log" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.504696 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" exitCode=1 Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.504766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c"} 
Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.504828 4792 scope.go:117] "RemoveContainer" containerID="5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.510232 4792 scope.go:117] "RemoveContainer" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" Mar 19 16:42:47 crc kubenswrapper[4792]: E0319 16:42:47.511120 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.533382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.554570 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.576056 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.595885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.613214 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.631120 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.653417 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.666832 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.682491 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.693141 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.704285 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.714179 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.728815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.739421 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:47 crc kubenswrapper[4792]: E0319 16:42:47.739521 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.739578 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:47 crc kubenswrapper[4792]: E0319 16:42:47.739629 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.740362 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:47 crc kubenswrapper[4792]: E0319 16:42:47.740428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.749651 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.753781 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring 
zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d
3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.762334 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.774488 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ 
timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.787519 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.797970 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.808908 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.824321 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: E0319 16:42:47.828385 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.846419 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.860102 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.869834 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.882816 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.895450 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.908307 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.929741 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac6b4ccdfabd7b2e4190d2a4ec2c02acd8712179d37aaf703ac19dae83cbb82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"message\\\":\\\":42:32.720165 6806 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720172 6806 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0319 16:42:32.720177 6806 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0319 16:42:32.720202 6806 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nF0319 16:42:32.720212 6806 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d
3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.942915 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.959529 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.974908 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:47 crc kubenswrapper[4792]: I0319 16:42:47.986634 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:47Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.512636 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/2.log" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.518268 4792 scope.go:117] "RemoveContainer" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" Mar 19 16:42:48 crc kubenswrapper[4792]: E0319 16:42:48.518589 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.539761 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.558509 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.579379 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.599604 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.623458 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.640149 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.659708 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.679262 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.703775 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.726493 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.739232 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:48 crc kubenswrapper[4792]: E0319 16:42:48.739428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.742357 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.760374 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.778265 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.801835 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.832833 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.852182 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:48Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:48 crc kubenswrapper[4792]: I0319 16:42:48.952792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:48 crc kubenswrapper[4792]: E0319 16:42:48.952997 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:48 crc kubenswrapper[4792]: E0319 16:42:48.953082 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:43:04.953059368 +0000 UTC m=+148.099116938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:42:49 crc kubenswrapper[4792]: I0319 16:42:49.739432 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:49 crc kubenswrapper[4792]: I0319 16:42:49.739622 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:49 crc kubenswrapper[4792]: E0319 16:42:49.739694 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:49 crc kubenswrapper[4792]: I0319 16:42:49.739734 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:49 crc kubenswrapper[4792]: E0319 16:42:49.739983 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:49 crc kubenswrapper[4792]: E0319 16:42:49.740026 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:50 crc kubenswrapper[4792]: I0319 16:42:50.739524 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:50 crc kubenswrapper[4792]: E0319 16:42:50.739762 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:51 crc kubenswrapper[4792]: I0319 16:42:51.738944 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:51 crc kubenswrapper[4792]: I0319 16:42:51.738990 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:51 crc kubenswrapper[4792]: I0319 16:42:51.738958 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:51 crc kubenswrapper[4792]: E0319 16:42:51.739151 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:51 crc kubenswrapper[4792]: E0319 16:42:51.739451 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:51 crc kubenswrapper[4792]: E0319 16:42:51.739354 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:52 crc kubenswrapper[4792]: I0319 16:42:52.739198 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:52 crc kubenswrapper[4792]: E0319 16:42:52.739671 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:52 crc kubenswrapper[4792]: I0319 16:42:52.754404 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 19 16:42:52 crc kubenswrapper[4792]: E0319 16:42:52.829298 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:42:53 crc kubenswrapper[4792]: I0319 16:42:53.739768 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:53 crc kubenswrapper[4792]: I0319 16:42:53.739977 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:53 crc kubenswrapper[4792]: I0319 16:42:53.740052 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:53 crc kubenswrapper[4792]: E0319 16:42:53.740339 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:53 crc kubenswrapper[4792]: E0319 16:42:53.740428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:53 crc kubenswrapper[4792]: E0319 16:42:53.740579 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:54 crc kubenswrapper[4792]: I0319 16:42:54.738882 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:54 crc kubenswrapper[4792]: E0319 16:42:54.739029 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:55 crc kubenswrapper[4792]: I0319 16:42:55.738759 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:55 crc kubenswrapper[4792]: I0319 16:42:55.738800 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:55 crc kubenswrapper[4792]: E0319 16:42:55.739381 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:55 crc kubenswrapper[4792]: I0319 16:42:55.738968 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:55 crc kubenswrapper[4792]: E0319 16:42:55.739500 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:55 crc kubenswrapper[4792]: E0319 16:42:55.739718 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.148796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.148870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.148887 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.148909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.148925 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:56Z","lastTransitionTime":"2026-03-19T16:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:42:56 crc kubenswrapper[4792]: E0319 16:42:56.166907 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:56Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.171713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.171759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.171776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.171798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.171816 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:56Z","lastTransitionTime":"2026-03-19T16:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:56 crc kubenswrapper[4792]: E0319 16:42:56.186015 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:56Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.191602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.191666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.191685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.191714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.191742 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:56Z","lastTransitionTime":"2026-03-19T16:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.211608 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.211648 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.211663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.211684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.211699 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:56Z","lastTransitionTime":"2026-03-19T16:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.235225 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.235263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.235276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.235296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.235310 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:42:56Z","lastTransitionTime":"2026-03-19T16:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:42:56 crc kubenswrapper[4792]: E0319 16:42:56.255255 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:56Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:56 crc kubenswrapper[4792]: E0319 16:42:56.255431 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:42:56 crc kubenswrapper[4792]: I0319 16:42:56.741035 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:56 crc kubenswrapper[4792]: E0319 16:42:56.741183 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.739230 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.739277 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.739236 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:57 crc kubenswrapper[4792]: E0319 16:42:57.739395 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:57 crc kubenswrapper[4792]: E0319 16:42:57.739480 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:57 crc kubenswrapper[4792]: E0319 16:42:57.739595 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.755053 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.766351 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.780313 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.791059 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.802727 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.820229 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: E0319 16:42:57.829979 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.838271 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.851909 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.867418 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.879708 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.893738 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.906298 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 
16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.917123 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.927570 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.935721 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.945714 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:57 crc kubenswrapper[4792]: I0319 16:42:57.957243 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41
:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:42:57Z is after 2025-08-24T17:21:41Z" Mar 19 16:42:58 crc kubenswrapper[4792]: I0319 16:42:58.739285 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:42:58 crc kubenswrapper[4792]: E0319 16:42:58.739472 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:42:59 crc kubenswrapper[4792]: I0319 16:42:59.739419 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:42:59 crc kubenswrapper[4792]: E0319 16:42:59.739572 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:42:59 crc kubenswrapper[4792]: I0319 16:42:59.739664 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:42:59 crc kubenswrapper[4792]: I0319 16:42:59.739758 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:42:59 crc kubenswrapper[4792]: E0319 16:42:59.740365 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:42:59 crc kubenswrapper[4792]: E0319 16:42:59.740491 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:42:59 crc kubenswrapper[4792]: I0319 16:42:59.740950 4792 scope.go:117] "RemoveContainer" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" Mar 19 16:42:59 crc kubenswrapper[4792]: E0319 16:42:59.741233 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:00 crc kubenswrapper[4792]: I0319 16:43:00.739200 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:00 crc kubenswrapper[4792]: E0319 16:43:00.739401 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:01 crc kubenswrapper[4792]: I0319 16:43:01.739558 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:01 crc kubenswrapper[4792]: E0319 16:43:01.739708 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:01 crc kubenswrapper[4792]: I0319 16:43:01.739555 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:01 crc kubenswrapper[4792]: I0319 16:43:01.739825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:01 crc kubenswrapper[4792]: E0319 16:43:01.739988 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:01 crc kubenswrapper[4792]: E0319 16:43:01.740050 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:02 crc kubenswrapper[4792]: I0319 16:43:02.739046 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:02 crc kubenswrapper[4792]: E0319 16:43:02.739273 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:02 crc kubenswrapper[4792]: E0319 16:43:02.831692 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:03 crc kubenswrapper[4792]: I0319 16:43:03.739277 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:03 crc kubenswrapper[4792]: I0319 16:43:03.739317 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:03 crc kubenswrapper[4792]: I0319 16:43:03.739277 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:03 crc kubenswrapper[4792]: E0319 16:43:03.739411 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:03 crc kubenswrapper[4792]: E0319 16:43:03.739591 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:03 crc kubenswrapper[4792]: E0319 16:43:03.739686 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:04 crc kubenswrapper[4792]: I0319 16:43:04.738624 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:04 crc kubenswrapper[4792]: E0319 16:43:04.738757 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:05 crc kubenswrapper[4792]: I0319 16:43:05.038277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:05 crc kubenswrapper[4792]: E0319 16:43:05.038500 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:43:05 crc kubenswrapper[4792]: E0319 16:43:05.038657 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:43:37.038625342 +0000 UTC m=+180.184682922 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 16:43:05 crc kubenswrapper[4792]: I0319 16:43:05.739423 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:05 crc kubenswrapper[4792]: I0319 16:43:05.739515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:05 crc kubenswrapper[4792]: I0319 16:43:05.739582 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:05 crc kubenswrapper[4792]: E0319 16:43:05.739732 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:05 crc kubenswrapper[4792]: E0319 16:43:05.740029 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:05 crc kubenswrapper[4792]: E0319 16:43:05.740162 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.323795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.323882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.323902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.323959 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.323977 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:06Z","lastTransitionTime":"2026-03-19T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.345606 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:06Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.350443 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.350480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.350491 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.350504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.350535 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:06Z","lastTransitionTime":"2026-03-19T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.367975 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:06Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.371927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.372037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.372050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.372061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.372096 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:06Z","lastTransitionTime":"2026-03-19T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.390113 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:06Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.394037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.394075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.394088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.394103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.394112 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:06Z","lastTransitionTime":"2026-03-19T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.412115 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:06Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.415453 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.415504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.415523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.415546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.415564 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:06Z","lastTransitionTime":"2026-03-19T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.429294 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:06Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.429498 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:43:06 crc kubenswrapper[4792]: I0319 16:43:06.738909 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:06 crc kubenswrapper[4792]: E0319 16:43:06.739100 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.582781 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/0.log" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.582895 4792 generic.go:334] "Generic (PLEG): container finished" podID="c71152a8-67de-430c-a09b-1535ebc93a9a" containerID="9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0" exitCode=1 Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.582936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerDied","Data":"9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0"} Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.583443 4792 scope.go:117] "RemoveContainer" containerID="9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.607506 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.622009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.633724 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.652067 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.672298 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.688447 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.700000 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.710468 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.726079 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc 
kubenswrapper[4792]: I0319 16:43:07.738818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.738976 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.738995 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.739005 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:07 crc kubenswrapper[4792]: E0319 16:43:07.739182 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:07 crc kubenswrapper[4792]: E0319 16:43:07.739273 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:07 crc kubenswrapper[4792]: E0319 16:43:07.739355 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.754429 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.775512 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.792783 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.804390 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.821494 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: E0319 16:43:07.832367 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.846279 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.860622 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.873810 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.890330 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.902154 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.916187 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.934629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab
575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.948831 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.960110 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.975881 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.987754 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:07 crc kubenswrapper[4792]: I0319 16:43:07.999296 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:07Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.012366 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.026605 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.043511 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.061019 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.081379 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.098128 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.107752 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.588572 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/0.log" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.588627 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerStarted","Data":"7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346"} Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.610612 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.630682 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.645801 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.660091 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.673786 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.690165 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.700954 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.712878 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.725290 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.736959 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.738662 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:08 crc kubenswrapper[4792]: E0319 16:43:08.738799 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.756776 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.767803 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.780254 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.791152 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.804652 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.822535 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:08 crc kubenswrapper[4792]: I0319 16:43:08.832763 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:08Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:09 crc kubenswrapper[4792]: I0319 16:43:09.739611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:09 crc kubenswrapper[4792]: I0319 16:43:09.739632 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:09 crc kubenswrapper[4792]: I0319 16:43:09.739640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:09 crc kubenswrapper[4792]: E0319 16:43:09.739728 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:09 crc kubenswrapper[4792]: E0319 16:43:09.739916 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:09 crc kubenswrapper[4792]: E0319 16:43:09.739944 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:10 crc kubenswrapper[4792]: I0319 16:43:10.739382 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:10 crc kubenswrapper[4792]: E0319 16:43:10.739615 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:11 crc kubenswrapper[4792]: I0319 16:43:11.738806 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:11 crc kubenswrapper[4792]: I0319 16:43:11.738931 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:11 crc kubenswrapper[4792]: E0319 16:43:11.739053 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:11 crc kubenswrapper[4792]: I0319 16:43:11.739146 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:11 crc kubenswrapper[4792]: E0319 16:43:11.739671 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:11 crc kubenswrapper[4792]: E0319 16:43:11.739803 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:11 crc kubenswrapper[4792]: I0319 16:43:11.740270 4792 scope.go:117] "RemoveContainer" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.604120 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/2.log" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.606943 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb"} Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.607638 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.623830 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.636059 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.652121 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.676549 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\
\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.693996 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc 
kubenswrapper[4792]: I0319 16:43:12.707102 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639f
baa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 
16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.717917 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.729700 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.739541 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:12 crc kubenswrapper[4792]: E0319 16:43:12.739676 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.740341 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.752592 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.768067 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.778009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.798028 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.820278 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.833489 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: E0319 16:43:12.833551 4792 
kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.844538 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:12 crc kubenswrapper[4792]: I0319 16:43:12.856083 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:12Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.612446 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/3.log" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.613153 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/2.log" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.616213 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" exitCode=1 Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.616258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb"} Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.616295 4792 scope.go:117] "RemoveContainer" containerID="f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.616822 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:43:13 crc kubenswrapper[4792]: E0319 16:43:13.617090 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.639498 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.656294 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 
16:43:13.674438 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.686399 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.709210 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f270335950349a4d7b20913ed8ffb470ab314936460602044973d7e740f8910c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:42:46Z\\\",\\\"message\\\":\\\"hift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0319 16:42:46.812515 7045 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:12Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-oauth-apiserver/api LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.140\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0319 16:43:12.807493 7366 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808713 7366 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808727 7366 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0319 16:43:12.808694 7366 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.808784 7366 services_controller.go:445] Built service openshift-oauth-apiserver/api LB template configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.807609 7366 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-5tgtj\\\\nI0319 16:43:12.808826 7366 services_controller.go:451] Built service 
op\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.721776 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.734155 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.739320 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.739365 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:13 crc kubenswrapper[4792]: E0319 16:43:13.739439 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.739323 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:13 crc kubenswrapper[4792]: E0319 16:43:13.739629 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:13 crc kubenswrapper[4792]: E0319 16:43:13.739715 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.746582 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.760749 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.775085 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.785273 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.796515 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.807382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.817351 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d27
34937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.828114 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.838743 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:13 crc kubenswrapper[4792]: I0319 16:43:13.847355 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:13Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.622405 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/3.log" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.628068 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:43:14 crc kubenswrapper[4792]: E0319 16:43:14.628416 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.649981 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.671314 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.683194 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.705629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.724678 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.738951 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:14 crc kubenswrapper[4792]: E0319 16:43:14.739068 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.746700 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.771251 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.801502 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:12Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-oauth-apiserver/api LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.140\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0319 16:43:12.807493 7366 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808713 7366 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808727 7366 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0319 16:43:12.808694 7366 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.808784 7366 services_controller.go:445] Built service openshift-oauth-apiserver/api LB template configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.807609 7366 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-5tgtj\\\\nI0319 16:43:12.808826 7366 services_controller.go:451] Built service op\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.815903 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.831291 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c6
2e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.844434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.859551 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.873251 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d32070658660
2debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.888054 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.902166 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.916751 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:14 crc kubenswrapper[4792]: I0319 16:43:14.928555 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:14Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.648455 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.648590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.648740 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.648828 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:44:19.64880739 +0000 UTC m=+222.794865010 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649003 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649036 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649057 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649132 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-19 16:44:19.649110577 +0000 UTC m=+222.795168147 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.648786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.649214 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649415 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649547 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-19 16:44:19.649518458 +0000 UTC m=+222.795576028 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.649892 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:19.649825636 +0000 UTC m=+222.795883216 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.739623 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.739640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.739737 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.739914 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.740094 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.740575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:15 crc kubenswrapper[4792]: I0319 16:43:15.749951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.750207 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.750236 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.750253 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:43:15 crc kubenswrapper[4792]: E0319 16:43:15.750311 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-19 16:44:19.750290846 +0000 UTC m=+222.896348416 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.511595 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.511635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.511644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.511662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.511671 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:16Z","lastTransitionTime":"2026-03-19T16:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.522566 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:16Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.526660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.526859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.526958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.527042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.527107 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:16Z","lastTransitionTime":"2026-03-19T16:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.538978 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:16Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.544390 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.544465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.544490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.544520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.544544 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:16Z","lastTransitionTime":"2026-03-19T16:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.563548 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{… status patch payload identical to the 16:43:16.538978 entry above …}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:16Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.568515 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.568573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.568590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.568617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.568636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:16Z","lastTransitionTime":"2026-03-19T16:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.586245 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:16Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.591472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.591531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.591543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.591561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.591571 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:16Z","lastTransitionTime":"2026-03-19T16:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.604280 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:16Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.604392 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:43:16 crc kubenswrapper[4792]: I0319 16:43:16.739831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:16 crc kubenswrapper[4792]: E0319 16:43:16.740189 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.739084 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.739281 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.739333 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:17 crc kubenswrapper[4792]: E0319 16:43:17.739480 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:17 crc kubenswrapper[4792]: E0319 16:43:17.741030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:17 crc kubenswrapper[4792]: E0319 16:43:17.741495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.763503 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 
10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.785125 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.802269 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.820203 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: E0319 16:43:17.834491 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.843190 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.864270 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.883729 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.900333 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.920674 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.939924 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:17 crc kubenswrapper[4792]: I0319 16:43:17.964759 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:17.999943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:12Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-oauth-apiserver/api LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.140\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0319 16:43:12.807493 7366 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808713 7366 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808727 7366 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0319 16:43:12.808694 7366 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.808784 7366 services_controller.go:445] Built service openshift-oauth-apiserver/api LB template configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.807609 7366 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-5tgtj\\\\nI0319 16:43:12.808826 7366 services_controller.go:451] Built service op\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:17Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.019199 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:18Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.042668 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f27
6c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:18Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.063699 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:18Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.080020 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:18Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.096109 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:18Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:18 crc kubenswrapper[4792]: I0319 16:43:18.738799 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:18 crc kubenswrapper[4792]: E0319 16:43:18.739025 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:19 crc kubenswrapper[4792]: I0319 16:43:19.739917 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:19 crc kubenswrapper[4792]: I0319 16:43:19.739975 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:19 crc kubenswrapper[4792]: I0319 16:43:19.739930 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:19 crc kubenswrapper[4792]: E0319 16:43:19.740068 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:19 crc kubenswrapper[4792]: E0319 16:43:19.740180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:19 crc kubenswrapper[4792]: E0319 16:43:19.740253 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:20 crc kubenswrapper[4792]: I0319 16:43:20.738984 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:20 crc kubenswrapper[4792]: E0319 16:43:20.739708 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:21 crc kubenswrapper[4792]: I0319 16:43:21.739063 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:21 crc kubenswrapper[4792]: I0319 16:43:21.739063 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:21 crc kubenswrapper[4792]: I0319 16:43:21.739180 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:21 crc kubenswrapper[4792]: E0319 16:43:21.739899 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:21 crc kubenswrapper[4792]: E0319 16:43:21.739985 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:21 crc kubenswrapper[4792]: E0319 16:43:21.739765 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:22 crc kubenswrapper[4792]: I0319 16:43:22.738998 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:22 crc kubenswrapper[4792]: E0319 16:43:22.739159 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:22 crc kubenswrapper[4792]: E0319 16:43:22.835921 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:23 crc kubenswrapper[4792]: I0319 16:43:23.739120 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:23 crc kubenswrapper[4792]: I0319 16:43:23.739817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:23 crc kubenswrapper[4792]: E0319 16:43:23.739969 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:23 crc kubenswrapper[4792]: I0319 16:43:23.740243 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:23 crc kubenswrapper[4792]: E0319 16:43:23.740289 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:23 crc kubenswrapper[4792]: E0319 16:43:23.740400 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:23 crc kubenswrapper[4792]: I0319 16:43:23.757700 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 19 16:43:24 crc kubenswrapper[4792]: I0319 16:43:24.738661 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:24 crc kubenswrapper[4792]: E0319 16:43:24.739198 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:24 crc kubenswrapper[4792]: I0319 16:43:24.756457 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 19 16:43:25 crc kubenswrapper[4792]: I0319 16:43:25.739167 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:25 crc kubenswrapper[4792]: E0319 16:43:25.739669 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:25 crc kubenswrapper[4792]: I0319 16:43:25.739370 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:25 crc kubenswrapper[4792]: I0319 16:43:25.739162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:25 crc kubenswrapper[4792]: E0319 16:43:25.739910 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:25 crc kubenswrapper[4792]: E0319 16:43:25.740148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.738896 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:26 crc kubenswrapper[4792]: E0319 16:43:26.739125 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.740467 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:43:26 crc kubenswrapper[4792]: E0319 16:43:26.740737 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.971425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.971494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.971514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.971536 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.971556 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:26Z","lastTransitionTime":"2026-03-19T16:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:43:26 crc kubenswrapper[4792]: E0319 16:43:26.987177 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:26Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.993178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.993244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.993263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.993289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:26 crc kubenswrapper[4792]: I0319 16:43:26.993309 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:26Z","lastTransitionTime":"2026-03-19T16:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.013820 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.018802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.018885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.018906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.018934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.018952 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:27Z","lastTransitionTime":"2026-03-19T16:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.039769 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.044790 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.044952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.044970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.044993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.045005 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:27Z","lastTransitionTime":"2026-03-19T16:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.065107 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.069311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.069341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.069354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.069370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.069383 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:27Z","lastTransitionTime":"2026-03-19T16:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.087921 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.088062 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.738749 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.738886 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.739127 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.739160 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.739241 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.739373 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.755763 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.776758 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.793908 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.811197 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.824100 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: E0319 16:43:27.836542 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.840498 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:12Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-oauth-apiserver/api LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.140\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0319 16:43:12.807493 7366 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808713 7366 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808727 7366 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0319 16:43:12.808694 7366 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.808784 7366 services_controller.go:445] Built service openshift-oauth-apiserver/api LB template configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.807609 7366 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-5tgtj\\\\nI0319 16:43:12.808826 7366 services_controller.go:451] Built service op\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.852864 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.875448 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6883fc-c163-4bd3-86ee-311ea3247274\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4484b69c21312b3be2793b1f0146c97f3f220990f691bf5c72e7040301fc48ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee61d5a25a853c60c381c2487199133c54a0b638b86ff36de00eacea7223c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c729a4859bb54286f01a9a840f4227fcc3910f3e44c2967be4fec08e10f679f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
4204b2817a908e1c39c947a4d0aaf4bc4b190069f888f806252005106307a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a971fc04afd85666fdb8896af31c8843b71c2874dce9926e30412bbf930abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.891193 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.904406 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.917853 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] 
Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.930268 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.941998 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-1
9T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.953021 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.963809 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.976219 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d27
34937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.986677 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e70ea009-ccb4-4a64-9c62-cc939c74ffaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ffb102ca0b970820f5e520f2d4fd3c26e58dbeaff44ce5485d0269440f0071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:27 crc kubenswrapper[4792]: I0319 16:43:27.998803 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:27Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:28 crc kubenswrapper[4792]: I0319 16:43:28.011457 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:28Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:28 crc kubenswrapper[4792]: I0319 16:43:28.738678 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:28 crc kubenswrapper[4792]: E0319 16:43:28.739219 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:29 crc kubenswrapper[4792]: I0319 16:43:29.739017 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:29 crc kubenswrapper[4792]: E0319 16:43:29.739134 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:29 crc kubenswrapper[4792]: I0319 16:43:29.739033 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:29 crc kubenswrapper[4792]: I0319 16:43:29.739183 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:29 crc kubenswrapper[4792]: E0319 16:43:29.739366 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:29 crc kubenswrapper[4792]: E0319 16:43:29.739430 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:30 crc kubenswrapper[4792]: I0319 16:43:30.739466 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:30 crc kubenswrapper[4792]: E0319 16:43:30.740237 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:31 crc kubenswrapper[4792]: I0319 16:43:31.738734 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:31 crc kubenswrapper[4792]: E0319 16:43:31.738987 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:31 crc kubenswrapper[4792]: I0319 16:43:31.739134 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:31 crc kubenswrapper[4792]: I0319 16:43:31.739172 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:31 crc kubenswrapper[4792]: E0319 16:43:31.739378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:31 crc kubenswrapper[4792]: E0319 16:43:31.739671 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:32 crc kubenswrapper[4792]: I0319 16:43:32.738708 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:32 crc kubenswrapper[4792]: E0319 16:43:32.738952 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:32 crc kubenswrapper[4792]: E0319 16:43:32.838129 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 19 16:43:33 crc kubenswrapper[4792]: I0319 16:43:33.738978 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:33 crc kubenswrapper[4792]: I0319 16:43:33.739023 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:33 crc kubenswrapper[4792]: I0319 16:43:33.739090 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:33 crc kubenswrapper[4792]: E0319 16:43:33.740279 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:33 crc kubenswrapper[4792]: E0319 16:43:33.740378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:33 crc kubenswrapper[4792]: E0319 16:43:33.740671 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 19 16:43:34 crc kubenswrapper[4792]: I0319 16:43:34.738886 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj"
Mar 19 16:43:34 crc kubenswrapper[4792]: E0319 16:43:34.739064 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f"
Mar 19 16:43:35 crc kubenswrapper[4792]: I0319 16:43:35.739057 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 19 16:43:35 crc kubenswrapper[4792]: I0319 16:43:35.739101 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 19 16:43:35 crc kubenswrapper[4792]: I0319 16:43:35.739310 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 19 16:43:35 crc kubenswrapper[4792]: E0319 16:43:35.739428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 19 16:43:35 crc kubenswrapper[4792]: E0319 16:43:35.739530 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 19 16:43:35 crc kubenswrapper[4792]: E0319 16:43:35.739594 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 19 16:43:36 crc kubenswrapper[4792]: I0319 16:43:36.739510 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj"
Mar 19 16:43:36 crc kubenswrapper[4792]: E0319 16:43:36.739790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.100300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj"
Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.100483 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.100588 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs podName:ab985610-78ac-44cf-a2ee-9a4a52dc431f nodeName:}" failed. No retries permitted until 2026-03-19 16:44:41.100561283 +0000 UTC m=+244.246618863 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs") pod "network-metrics-daemon-n8pzj" (UID: "ab985610-78ac-44cf-a2ee-9a4a52dc431f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.393638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.393705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.393723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.393748 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.393768 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:37Z","lastTransitionTime":"2026-03-19T16:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.414936 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.421161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.421218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.421236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.421259 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.421275 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:37Z","lastTransitionTime":"2026-03-19T16:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.445007 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.445045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.445059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.445076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.445089 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:37Z","lastTransitionTime":"2026-03-19T16:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.460013 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1791f1-2726-4ce2-afce-76ff4bb66f00\\\",\\\"systemUUID\\\":\\\"acd41293-7f2d-450a-aedf-420c50056810\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.468546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.469061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.469292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.469522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.469740 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:37Z","lastTransitionTime":"2026-03-19T16:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.490886 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.495977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.496036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.496056 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.496080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.496097 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:37Z","lastTransitionTime":"2026-03-19T16:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.515782 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.516059 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.739087 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.739090 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.739106 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.739333 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.739437 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.739688 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.756441 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0319 16:41:35.446022 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0319 16:41:35.446183 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0319 16:41:35.446782 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2926464176/tls.crt::/tmp/serving-cert-2926464176/tls.key\\\\\\\"\\\\nI0319 16:41:35.725563 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0319 16:41:35.737901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0319 16:41:35.737944 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0319 16:41:35.737984 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0319 16:41:35.737995 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0319 16:41:35.762851 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0319 16:41:35.763096 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763135 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0319 16:41:35.763166 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0319 16:41:35.763194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0319 16:41:35.763227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0319 16:41:35.763263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0319 16:41:35.762937 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0319 16:41:35.764721 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.770716 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a368c72-996e-4f74-b41b-197cc7e5cafb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244e3e3e017f42181c0fd9893903c24c891c161768a10fefe4d9c201228bdb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f10d929314dbc5129698d54d1587a95505492d3852677f3c92405d92a239ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45a466c62e2ded8baa51645ce72536822c1dd1f98dca8d002a65a941dc059733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.789062 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932b6a8c69eb6f7ba2212385f9965824d805211d32b071ba5f470ad7cc9d8c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.812480 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vbvt5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71152a8-67de-430c-a09b-1535ebc93a9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:43:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:06Z\\\",\\\"message\\\":\\\"2026-03-19T16:42:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a\\\\n2026-03-19T16:42:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_90dc3b2c-577c-449a-8613-2664bf925d3a to /host/opt/cni/bin/\\\\n2026-03-19T16:42:21Z [verbose] multus-daemon started\\\\n2026-03-19T16:42:21Z [verbose] Readiness Indicator file check\\\\n2026-03-19T16:43:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:43:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lkt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vbvt5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.827421 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e70ea009-ccb4-4a64-9c62-cc939c74ffaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ffb102ca0b970820f5e520f2d4fd3c26e58dbeaff44ce5485d0269440f0071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f70560a54e70117810af4aa326e1649aac3f866ccd75f9a467d6355f67fa4f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: E0319 16:43:37.839585 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.844339 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e363249d-8c68-4e6f-9fa6-5714ef765097\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fdf4063f946457db31d9b14472b72885e7f91c7dce545e1f5ffdf9da19c2837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f04076543b3e11b142c5b72f1ac235cbf4ab575b51c79ce98ca37dfc176d143\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-19T16:41:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0319 16:40:39.771132 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0319 16:40:39.772947 1 observer_polling.go:159] Starting file observer\\\\nI0319 16:40:39.805801 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0319 16:40:39.811389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0319 16:41:02.892223 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0319 16:41:02.892347 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:41:02Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f162463d9042f4c503027ca2fe1fc4a6100961c3f235609786cf80df0a56951\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de5ec9bd5b88c2bbbdd10af08a2ddf591e88129550a2621d2cb6e4249d0fad2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.858355 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.867815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfhg9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efa5c7e-77e2-464b-9a81-cc95b1fe63d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87832a59ac3f4bf0ef6e496cfed4a7418fa71dda461d9ea1c3deefa98f992277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-754jf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfhg9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.880092 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c52ab600-6188-4491-9186-622991c75340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8153d25b3de534b61517d151e53ac903fe8404d3f6d784612bc1d2013a4e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e83ef9dd2518aac59e772825875f2c23d2734937ad1d73a4225169915ca75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j25gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q4gqr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.895738 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f050af81ff2ec1186f5b52d0edd8e1e7df7d9796ee9d4e434a1920ff41230c6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.911706 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9143c54bbf0354a2c1c674aede09ebd1d73bef927e5d1b715de557ff5a4fb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e8cb513e277a6e480268b8c180d6d2ddcc6b345bb861cf22c494c1adfbda80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.930525 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.941244 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvfx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812ae5e5-a1ff-42ef-b120-95b6f3a18957\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0022223dc462534c32e1e84e10fc64345f8b50489f22e070b787a4cfbb017d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkhqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvfx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.961595 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6883fc-c163-4bd3-86ee-311ea3247274\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:41:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4484b69c21312b3be2793b1f0146c97f3f220990f691bf5c72e7040301fc48ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee61d5a25a853c60c381c2487199133c54a0b638b86ff36de00eacea7223c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c729a4859bb54286f01a9a840f4227fcc3910f3e44c2967be4fec08e10f679f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4204b2817a908e1c39c947a4d0aaf4bc4b190069f888f806252005106307a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a971fc04afd85666fdb8896af31c8843b71c2874dce9926e30412bbf930abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e82ecffe8f089bae9d973879c14aa87343495349cb411df59233ca83ffdd81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b405fc2d925ac270902be910042efe87f3e47c5cce752bb04b57484aff17b9b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f36b82b5e953ac3c8806e9565a36f3487d13cd94b1601282b7ba0de2f789232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:40:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.974939 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:37 crc kubenswrapper[4792]: I0319 16:43:37.989248 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9e72e9a-50c3-41db-8657-7ae683c7c13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a45c804b6a15d3c277513321c42fd2be85b7dc3ce95968ce2b1e185b5efbc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3
168dfa04bf40b920267ba08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szhln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:37Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:38 crc kubenswrapper[4792]: I0319 16:43:38.003716 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"159854fb-4797-4205-a888-ff4ae76d14e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://815bc2843d60d56569db84f8dddf94edf834830327040d8e028c4fe2ae773c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d851f37eeda7ad5c872d9c495717e417c4f4ff150388481eba90049b3cdb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9618b2d8ea6e70a18c7f56d8e76143844379888b1711ce9045dbbad3005c12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a63352b965c73805f9d763b60ec6b0828be631c93cb596e5601d98bbce9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe412
683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe412683819033bb9687165792ec209413dd7a8f005481dd34731b836c2023a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47db1b368627322d72808cbc16329402b648fd0fde8c058ce7f6d849ecc4c8bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761ab3f56d41c208bec117be32f94870e1db3d2a9a1a78323711588e3932d104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6g7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mhtlt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:38Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:38 crc kubenswrapper[4792]: I0319 16:43:38.027169 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8705e1c9-d503-400f-93b0-b04ce7083d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-19T16:43:12Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-oauth-apiserver/api LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.140\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0319 16:43:12.807493 7366 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808713 7366 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0319 16:43:12.808727 7366 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0319 16:43:12.808694 7366 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.808784 7366 services_controller.go:445] Built service openshift-oauth-apiserver/api LB template configs for network=default: []services.lbConfig(nil)\\\\nI0319 16:43:12.807609 7366 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-5tgtj\\\\nI0319 16:43:12.808826 7366 services_controller.go:451] Built service op\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-19T16:43:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-19T16:42:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://499cd639499c1aee58
d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-19T16:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-19T16:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9w4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5tgtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:38Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:38 crc kubenswrapper[4792]: I0319 16:43:38.040053 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab985610-78ac-44cf-a2ee-9a4a52dc431f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-19T16:42:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vfs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-19T16:42:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n8pzj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-19T16:43:38Z is after 2025-08-24T17:21:41Z" Mar 19 16:43:38 crc kubenswrapper[4792]: I0319 16:43:38.738934 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:38 crc kubenswrapper[4792]: E0319 16:43:38.739219 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:39 crc kubenswrapper[4792]: I0319 16:43:39.739269 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:39 crc kubenswrapper[4792]: E0319 16:43:39.739448 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:39 crc kubenswrapper[4792]: I0319 16:43:39.739759 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:39 crc kubenswrapper[4792]: E0319 16:43:39.739872 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:39 crc kubenswrapper[4792]: I0319 16:43:39.740162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:39 crc kubenswrapper[4792]: E0319 16:43:39.740387 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:40 crc kubenswrapper[4792]: I0319 16:43:40.739299 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:40 crc kubenswrapper[4792]: E0319 16:43:40.739425 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:40 crc kubenswrapper[4792]: I0319 16:43:40.740235 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:43:40 crc kubenswrapper[4792]: E0319 16:43:40.740513 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:41 crc kubenswrapper[4792]: I0319 16:43:41.739215 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:41 crc kubenswrapper[4792]: I0319 16:43:41.739254 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:41 crc kubenswrapper[4792]: E0319 16:43:41.739470 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:41 crc kubenswrapper[4792]: E0319 16:43:41.739705 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:41 crc kubenswrapper[4792]: I0319 16:43:41.740034 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:41 crc kubenswrapper[4792]: E0319 16:43:41.740170 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:42 crc kubenswrapper[4792]: I0319 16:43:42.739131 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:42 crc kubenswrapper[4792]: E0319 16:43:42.739347 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:42 crc kubenswrapper[4792]: E0319 16:43:42.841499 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:43 crc kubenswrapper[4792]: I0319 16:43:43.739121 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:43 crc kubenswrapper[4792]: I0319 16:43:43.739150 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:43 crc kubenswrapper[4792]: I0319 16:43:43.739219 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:43 crc kubenswrapper[4792]: E0319 16:43:43.739223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:43 crc kubenswrapper[4792]: E0319 16:43:43.739294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:43 crc kubenswrapper[4792]: E0319 16:43:43.739633 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:44 crc kubenswrapper[4792]: I0319 16:43:44.738618 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:44 crc kubenswrapper[4792]: E0319 16:43:44.738791 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:45 crc kubenswrapper[4792]: I0319 16:43:45.739651 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:45 crc kubenswrapper[4792]: I0319 16:43:45.739702 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:45 crc kubenswrapper[4792]: E0319 16:43:45.739908 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:45 crc kubenswrapper[4792]: I0319 16:43:45.739707 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:45 crc kubenswrapper[4792]: E0319 16:43:45.740030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:45 crc kubenswrapper[4792]: E0319 16:43:45.740209 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:46 crc kubenswrapper[4792]: I0319 16:43:46.739035 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:46 crc kubenswrapper[4792]: E0319 16:43:46.739461 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.701188 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.701266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.701292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.701323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.701345 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-19T16:43:47Z","lastTransitionTime":"2026-03-19T16:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.739128 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.739154 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.739300 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:47 crc kubenswrapper[4792]: E0319 16:43:47.739681 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:47 crc kubenswrapper[4792]: E0319 16:43:47.739922 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:47 crc kubenswrapper[4792]: E0319 16:43:47.740113 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.780530 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh"] Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.781107 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.782978 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.783923 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.784611 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.785985 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.795049 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.809618 4792 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.823492 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cvfx6" podStartSLOduration=135.823468416 podStartE2EDuration="2m15.823468416s" podCreationTimestamp="2026-03-19 16:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:47.793778833 +0000 UTC m=+190.939836413" watchObservedRunningTime="2026-03-19 16:43:47.823468416 +0000 UTC m=+190.969525996" Mar 19 16:43:47 crc kubenswrapper[4792]: E0319 16:43:47.842356 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.846419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea84d206-5e31-4dc1-9170-a389dae26e7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.846533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea84d206-5e31-4dc1-9170-a389dae26e7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.846625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.846700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea84d206-5e31-4dc1-9170-a389dae26e7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.846900 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.901382 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podStartSLOduration=134.901363719 podStartE2EDuration="2m14.901363719s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:47.876644973 +0000 UTC m=+191.022702553" watchObservedRunningTime="2026-03-19 16:43:47.901363719 +0000 UTC m=+191.047421259" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.901781 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mhtlt" podStartSLOduration=134.901777181 podStartE2EDuration="2m14.901777181s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:47.901550995 +0000 UTC m=+191.047608555" watchObservedRunningTime="2026-03-19 16:43:47.901777181 +0000 UTC m=+191.047834721" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 
crc kubenswrapper[4792]: I0319 16:43:47.948094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea84d206-5e31-4dc1-9170-a389dae26e7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948135 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea84d206-5e31-4dc1-9170-a389dae26e7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea84d206-5e31-4dc1-9170-a389dae26e7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948187 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948233 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ea84d206-5e31-4dc1-9170-a389dae26e7b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.948933 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea84d206-5e31-4dc1-9170-a389dae26e7b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.952877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea84d206-5e31-4dc1-9170-a389dae26e7b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:47 crc kubenswrapper[4792]: I0319 16:43:47.965459 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.965446544 podStartE2EDuration="23.965446544s" podCreationTimestamp="2026-03-19 16:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:47.964798367 +0000 UTC m=+191.110855907" watchObservedRunningTime="2026-03-19 16:43:47.965446544 +0000 UTC m=+191.111504074" Mar 19 16:43:47 crc 
kubenswrapper[4792]: I0319 16:43:47.970551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea84d206-5e31-4dc1-9170-a389dae26e7b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-scpvh\" (UID: \"ea84d206-5e31-4dc1-9170-a389dae26e7b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.003077 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vbvt5" podStartSLOduration=135.003058254 podStartE2EDuration="2m15.003058254s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.002608842 +0000 UTC m=+191.148666382" watchObservedRunningTime="2026-03-19 16:43:48.003058254 +0000 UTC m=+191.149115794" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.026343 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.026324742 podStartE2EDuration="56.026324742s" podCreationTimestamp="2026-03-19 16:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.026223009 +0000 UTC m=+191.172280549" watchObservedRunningTime="2026-03-19 16:43:48.026324742 +0000 UTC m=+191.172382282" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.026433 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.026428924 podStartE2EDuration="1m32.026428924s" podCreationTimestamp="2026-03-19 16:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
16:43:48.016286887 +0000 UTC m=+191.162344427" watchObservedRunningTime="2026-03-19 16:43:48.026428924 +0000 UTC m=+191.172486464" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.050056 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gfhg9" podStartSLOduration=136.050035751 podStartE2EDuration="2m16.050035751s" podCreationTimestamp="2026-03-19 16:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.049028693 +0000 UTC m=+191.195086253" watchObservedRunningTime="2026-03-19 16:43:48.050035751 +0000 UTC m=+191.196093311" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.073248 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q4gqr" podStartSLOduration=135.073228136 podStartE2EDuration="2m15.073228136s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.060913048 +0000 UTC m=+191.206970598" watchObservedRunningTime="2026-03-19 16:43:48.073228136 +0000 UTC m=+191.219285686" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.087764 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.087748174 podStartE2EDuration="25.087748174s" podCreationTimestamp="2026-03-19 16:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.074276885 +0000 UTC m=+191.220334445" watchObservedRunningTime="2026-03-19 16:43:48.087748174 +0000 UTC m=+191.233805724" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.088116 4792 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=61.088107413 podStartE2EDuration="1m1.088107413s" podCreationTimestamp="2026-03-19 16:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.087206948 +0000 UTC m=+191.233264488" watchObservedRunningTime="2026-03-19 16:43:48.088107413 +0000 UTC m=+191.234164973" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.103472 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.739611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:48 crc kubenswrapper[4792]: E0319 16:43:48.740098 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.754310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" event={"ID":"ea84d206-5e31-4dc1-9170-a389dae26e7b","Type":"ContainerStarted","Data":"fbc6d85e2a16f1eaede574eb36d3967b4995ec2d3f309278bea3fe06169f1bf4"} Mar 19 16:43:48 crc kubenswrapper[4792]: I0319 16:43:48.754370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" event={"ID":"ea84d206-5e31-4dc1-9170-a389dae26e7b","Type":"ContainerStarted","Data":"c9760c80dac4f4726ccee1508197260d1ab680773d68c9b12cd06b1e2267c220"} Mar 19 16:43:49 crc kubenswrapper[4792]: I0319 16:43:49.738862 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:49 crc kubenswrapper[4792]: I0319 16:43:49.738938 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:49 crc kubenswrapper[4792]: E0319 16:43:49.738982 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:49 crc kubenswrapper[4792]: I0319 16:43:49.739030 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:49 crc kubenswrapper[4792]: E0319 16:43:49.739078 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:49 crc kubenswrapper[4792]: E0319 16:43:49.739165 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:50 crc kubenswrapper[4792]: I0319 16:43:50.738948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:50 crc kubenswrapper[4792]: E0319 16:43:50.739199 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:51 crc kubenswrapper[4792]: I0319 16:43:51.739026 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:51 crc kubenswrapper[4792]: I0319 16:43:51.739057 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:51 crc kubenswrapper[4792]: E0319 16:43:51.739197 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:51 crc kubenswrapper[4792]: I0319 16:43:51.739030 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:51 crc kubenswrapper[4792]: E0319 16:43:51.739304 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:51 crc kubenswrapper[4792]: E0319 16:43:51.739402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:52 crc kubenswrapper[4792]: I0319 16:43:52.738780 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:52 crc kubenswrapper[4792]: E0319 16:43:52.738993 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:52 crc kubenswrapper[4792]: I0319 16:43:52.740386 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:43:52 crc kubenswrapper[4792]: E0319 16:43:52.740918 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5tgtj_openshift-ovn-kubernetes(8705e1c9-d503-400f-93b0-b04ce7083d7a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" Mar 19 16:43:52 crc kubenswrapper[4792]: E0319 16:43:52.844170 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.738768 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.738871 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.738937 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:53 crc kubenswrapper[4792]: E0319 16:43:53.739069 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:53 crc kubenswrapper[4792]: E0319 16:43:53.739219 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:53 crc kubenswrapper[4792]: E0319 16:43:53.739353 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.772901 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/1.log" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.773728 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/0.log" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.773781 4792 generic.go:334] "Generic (PLEG): container finished" podID="c71152a8-67de-430c-a09b-1535ebc93a9a" containerID="7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346" exitCode=1 Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.773813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerDied","Data":"7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346"} Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.773881 4792 scope.go:117] "RemoveContainer" containerID="9b0885445ed0c0710e6034fd4a3446c700f63692d163b2bd776bc1dc999c78a0" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.774242 4792 scope.go:117] "RemoveContainer" containerID="7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346" Mar 19 16:43:53 crc kubenswrapper[4792]: E0319 16:43:53.774392 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vbvt5_openshift-multus(c71152a8-67de-430c-a09b-1535ebc93a9a)\"" pod="openshift-multus/multus-vbvt5" podUID="c71152a8-67de-430c-a09b-1535ebc93a9a" Mar 19 16:43:53 crc kubenswrapper[4792]: I0319 16:43:53.800362 4792 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-scpvh" podStartSLOduration=140.800317483 podStartE2EDuration="2m20.800317483s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:43:48.776768293 +0000 UTC m=+191.922825893" watchObservedRunningTime="2026-03-19 16:43:53.800317483 +0000 UTC m=+196.946375023" Mar 19 16:43:54 crc kubenswrapper[4792]: I0319 16:43:54.738799 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:54 crc kubenswrapper[4792]: E0319 16:43:54.739026 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:54 crc kubenswrapper[4792]: I0319 16:43:54.780393 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/1.log" Mar 19 16:43:55 crc kubenswrapper[4792]: I0319 16:43:55.739710 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:55 crc kubenswrapper[4792]: I0319 16:43:55.739761 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:55 crc kubenswrapper[4792]: E0319 16:43:55.740064 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:55 crc kubenswrapper[4792]: E0319 16:43:55.740312 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:55 crc kubenswrapper[4792]: I0319 16:43:55.740604 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:55 crc kubenswrapper[4792]: E0319 16:43:55.740767 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:56 crc kubenswrapper[4792]: I0319 16:43:56.739446 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:56 crc kubenswrapper[4792]: E0319 16:43:56.739638 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:57 crc kubenswrapper[4792]: I0319 16:43:57.739183 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:57 crc kubenswrapper[4792]: I0319 16:43:57.739256 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:57 crc kubenswrapper[4792]: I0319 16:43:57.739310 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:57 crc kubenswrapper[4792]: E0319 16:43:57.741229 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:57 crc kubenswrapper[4792]: E0319 16:43:57.741558 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:57 crc kubenswrapper[4792]: E0319 16:43:57.741722 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:43:57 crc kubenswrapper[4792]: E0319 16:43:57.844980 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:43:58 crc kubenswrapper[4792]: I0319 16:43:58.738707 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:43:58 crc kubenswrapper[4792]: E0319 16:43:58.738942 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:43:59 crc kubenswrapper[4792]: I0319 16:43:59.739267 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:43:59 crc kubenswrapper[4792]: E0319 16:43:59.739490 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:43:59 crc kubenswrapper[4792]: I0319 16:43:59.739793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:43:59 crc kubenswrapper[4792]: E0319 16:43:59.739915 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:43:59 crc kubenswrapper[4792]: I0319 16:43:59.740270 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:43:59 crc kubenswrapper[4792]: E0319 16:43:59.740465 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:00 crc kubenswrapper[4792]: I0319 16:44:00.739141 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:00 crc kubenswrapper[4792]: E0319 16:44:00.739324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:01 crc kubenswrapper[4792]: I0319 16:44:01.738634 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:01 crc kubenswrapper[4792]: I0319 16:44:01.738696 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:01 crc kubenswrapper[4792]: E0319 16:44:01.738815 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:01 crc kubenswrapper[4792]: I0319 16:44:01.738899 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:01 crc kubenswrapper[4792]: E0319 16:44:01.739032 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:01 crc kubenswrapper[4792]: E0319 16:44:01.739134 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:02 crc kubenswrapper[4792]: I0319 16:44:02.739055 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:02 crc kubenswrapper[4792]: E0319 16:44:02.739275 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:02 crc kubenswrapper[4792]: E0319 16:44:02.846481 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 19 16:44:03 crc kubenswrapper[4792]: I0319 16:44:03.739460 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:03 crc kubenswrapper[4792]: I0319 16:44:03.739561 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:03 crc kubenswrapper[4792]: I0319 16:44:03.739666 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:03 crc kubenswrapper[4792]: E0319 16:44:03.739661 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:03 crc kubenswrapper[4792]: E0319 16:44:03.739791 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:03 crc kubenswrapper[4792]: E0319 16:44:03.740065 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:04 crc kubenswrapper[4792]: I0319 16:44:04.738639 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:04 crc kubenswrapper[4792]: E0319 16:44:04.738869 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:05 crc kubenswrapper[4792]: I0319 16:44:05.739335 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:05 crc kubenswrapper[4792]: I0319 16:44:05.739472 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:05 crc kubenswrapper[4792]: E0319 16:44:05.739661 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:05 crc kubenswrapper[4792]: I0319 16:44:05.739730 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:05 crc kubenswrapper[4792]: E0319 16:44:05.739863 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:05 crc kubenswrapper[4792]: E0319 16:44:05.739941 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:05 crc kubenswrapper[4792]: I0319 16:44:05.741371 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.739237 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:06 crc kubenswrapper[4792]: E0319 16:44:06.739364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.778883 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n8pzj"] Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.830290 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/3.log" Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.833213 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:06 crc kubenswrapper[4792]: E0319 16:44:06.833358 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.833762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerStarted","Data":"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d"} Mar 19 16:44:06 crc kubenswrapper[4792]: I0319 16:44:06.834919 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:44:07 crc kubenswrapper[4792]: I0319 16:44:07.739318 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:07 crc kubenswrapper[4792]: I0319 16:44:07.739380 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:07 crc kubenswrapper[4792]: I0319 16:44:07.739342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:07 crc kubenswrapper[4792]: E0319 16:44:07.741808 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:07 crc kubenswrapper[4792]: E0319 16:44:07.741987 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:07 crc kubenswrapper[4792]: E0319 16:44:07.742092 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:07 crc kubenswrapper[4792]: E0319 16:44:07.847295 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:44:08 crc kubenswrapper[4792]: I0319 16:44:08.738632 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:08 crc kubenswrapper[4792]: E0319 16:44:08.738746 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:08 crc kubenswrapper[4792]: I0319 16:44:08.739268 4792 scope.go:117] "RemoveContainer" containerID="7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346" Mar 19 16:44:08 crc kubenswrapper[4792]: I0319 16:44:08.758327 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podStartSLOduration=155.758308146 podStartE2EDuration="2m35.758308146s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:06.861265856 +0000 UTC m=+210.007323426" watchObservedRunningTime="2026-03-19 16:44:08.758308146 +0000 UTC m=+211.904365696" Mar 19 16:44:09 crc kubenswrapper[4792]: I0319 16:44:09.738640 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:09 crc kubenswrapper[4792]: I0319 16:44:09.738692 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:09 crc kubenswrapper[4792]: E0319 16:44:09.738831 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:09 crc kubenswrapper[4792]: E0319 16:44:09.738964 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:09 crc kubenswrapper[4792]: I0319 16:44:09.739064 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:09 crc kubenswrapper[4792]: E0319 16:44:09.739262 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:09 crc kubenswrapper[4792]: I0319 16:44:09.842115 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/1.log" Mar 19 16:44:09 crc kubenswrapper[4792]: I0319 16:44:09.842193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerStarted","Data":"b5e0d4ec4f9a1d5f231a3612390ec1ef817e343e78cf509fe505125639449d7a"} Mar 19 16:44:10 crc kubenswrapper[4792]: I0319 16:44:10.738890 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:10 crc kubenswrapper[4792]: E0319 16:44:10.739417 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:11 crc kubenswrapper[4792]: I0319 16:44:11.739570 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:11 crc kubenswrapper[4792]: I0319 16:44:11.739674 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:11 crc kubenswrapper[4792]: E0319 16:44:11.739704 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 19 16:44:11 crc kubenswrapper[4792]: I0319 16:44:11.739577 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:11 crc kubenswrapper[4792]: E0319 16:44:11.739917 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 19 16:44:11 crc kubenswrapper[4792]: E0319 16:44:11.740002 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 19 16:44:12 crc kubenswrapper[4792]: I0319 16:44:12.739374 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:12 crc kubenswrapper[4792]: E0319 16:44:12.739529 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n8pzj" podUID="ab985610-78ac-44cf-a2ee-9a4a52dc431f" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.739565 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.739594 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.740058 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.751903 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.751969 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.752082 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 16:44:13 crc kubenswrapper[4792]: I0319 16:44:13.752644 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 19 16:44:14 crc kubenswrapper[4792]: I0319 16:44:14.738975 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:14 crc kubenswrapper[4792]: I0319 16:44:14.742409 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 19 16:44:14 crc kubenswrapper[4792]: I0319 16:44:14.743409 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.549074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.599174 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.600138 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.600322 4792 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.600930 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ms27t"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.601003 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.602230 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.602515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.603001 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.606463 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.614834 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-28msx"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.615388 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.616583 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.616661 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.616791 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.616939 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.617994 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xkgg2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.618020 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.622930 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.624370 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.625974 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.629546 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.629596 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.630377 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.630480 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nwdkb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.630979 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.631016 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.631942 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.636061 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.636381 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.636565 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.636783 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637234 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637421 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637560 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637699 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637741 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637831 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 
16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637937 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.637977 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.638061 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.638089 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.638216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.638320 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639058 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639087 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639245 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639290 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639525 4792 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639575 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639595 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639403 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.639768 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.640067 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.640198 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.640398 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.640503 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.640952 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.641765 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.646370 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.647076 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652548 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652601 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-config\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652628 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8zj\" (UniqueName: \"kubernetes.io/projected/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-kube-api-access-5x8zj\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652654 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652705 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit-dir\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652729 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-serving-cert\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532c43ef-0391-4ff7-b26c-aeef9da10c5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652828 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652933 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.652992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-encryption-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653035 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc 
kubenswrapper[4792]: I0319 16:44:18.653081 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpnqx\" (UniqueName: \"kubernetes.io/projected/79259d19-3c66-4aa6-baa6-666ee50833b2-kube-api-access-jpnqx\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653109 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532c43ef-0391-4ff7-b26c-aeef9da10c5e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653198 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert\") pod 
\"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8zv\" (UniqueName: \"kubernetes.io/projected/532c43ef-0391-4ff7-b26c-aeef9da10c5e-kube-api-access-fj8zv\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-images\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653557 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-client\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653599 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79259d19-3c66-4aa6-baa6-666ee50833b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653654 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gs4\" (UniqueName: \"kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653674 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-node-pullsecrets\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653713 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp7m9\" (UniqueName: \"kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9\") pod \"console-f9d7485db-q29n4\" (UID: 
\"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653758 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-image-import-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653776 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.653790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.668254 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.668398 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5q2cs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.668966 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.682598 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.684425 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.684738 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.685077 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.685688 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.685985 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686024 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686261 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686323 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686396 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686465 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686480 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686645 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686715 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.686771 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.697299 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.697661 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.699133 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.699526 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.700238 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.685690 4792 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.700600 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.705891 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.724611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.724987 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.726876 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6k44w"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.727207 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.734477 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.735832 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.738442 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.738512 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.738519 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.738583 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.739661 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.739831 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.740339 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.740707 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.741014 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.741442 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.742811 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.745219 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.745438 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.745905 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.746498 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.747024 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.747511 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.748092 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.748328 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.748825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.751548 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.751966 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.752373 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.752733 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.752806 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.753032 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.754647 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9qk59"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755036 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-client\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79259d19-3c66-4aa6-baa6-666ee50833b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk768\" (UniqueName: \"kubernetes.io/projected/b34ab160-ed91-4173-9f6d-af8e4373087a-kube-api-access-xk768\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755565 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8gs4\" (UniqueName: \"kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-node-pullsecrets\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755601 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp7m9\" (UniqueName: \"kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-image-import-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755651 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34ab160-ed91-4173-9f6d-af8e4373087a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755682 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ad570-6354-44ba-802c-4860784bf053-service-ca-bundle\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-config\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8zj\" (UniqueName: \"kubernetes.io/projected/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-kube-api-access-5x8zj\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ab160-ed91-4173-9f6d-af8e4373087a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755821 
4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkxgq\" (UniqueName: \"kubernetes.io/projected/2d1ad570-6354-44ba-802c-4860784bf053-kube-api-access-tkxgq\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit-dir\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755895 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-stats-auth\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-metrics-certs\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532c43ef-0391-4ff7-b26c-aeef9da10c5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-serving-cert\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.755983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4v4p\" (UniqueName: \"kubernetes.io/projected/c1c9f504-b92a-4bc2-95a4-c62610a18251-kube-api-access-c4v4p\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-default-certificate\") pod 
\"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756061 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-encryption-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756078 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756112 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpnqx\" (UniqueName: \"kubernetes.io/projected/79259d19-3c66-4aa6-baa6-666ee50833b2-kube-api-access-jpnqx\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532c43ef-0391-4ff7-b26c-aeef9da10c5e-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756145 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756166 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756183 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c9f504-b92a-4bc2-95a4-c62610a18251-metrics-tls\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8zv\" (UniqueName: 
\"kubernetes.io/projected/532c43ef-0391-4ff7-b26c-aeef9da10c5e-kube-api-access-fj8zv\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756261 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-images\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.756993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-serving-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.760443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.760647 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.761293 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532c43ef-0391-4ff7-b26c-aeef9da10c5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.761982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-node-pullsecrets\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.762251 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.766452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " 
pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.763336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-image-import-ca\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.765558 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.766638 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.778740 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.779761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-images\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.780582 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.781345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.782734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-serving-cert\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.783365 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79259d19-3c66-4aa6-baa6-666ee50833b2-config\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.784874 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.785334 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.786401 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.786718 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.787220 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.787372 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.787509 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.787639 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.789065 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc 
kubenswrapper[4792]: I0319 16:44:18.762879 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.790287 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.791255 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.791383 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.791591 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.791774 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-audit-dir\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.792749 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.792987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796250 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796366 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796471 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796511 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-encryption-config\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " 
pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796604 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.796624 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797110 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797667 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797756 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/79259d19-3c66-4aa6-baa6-666ee50833b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797758 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-etcd-client\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797827 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.797942 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798006 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798099 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798186 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798222 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798208 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798329 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798399 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.798420 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.799009 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.799413 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532c43ef-0391-4ff7-b26c-aeef9da10c5e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.807385 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.807387 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.808427 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.809887 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.826090 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.826585 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pz5zs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.827458 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.826605 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.828906 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.829611 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.829981 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.830085 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.831493 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.832193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.833850 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.834553 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.835546 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.836035 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.836351 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.838601 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.840164 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.840295 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.841069 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.842410 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.843144 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.843324 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.843931 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.844480 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.845632 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.846130 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.846467 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5dbc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.846994 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.850372 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565644-gg5p9"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.851063 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ms27t"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.851152 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.853344 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.855927 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.856862 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-28msx"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.856965 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4v4p\" (UniqueName: \"kubernetes.io/projected/c1c9f504-b92a-4bc2-95a4-c62610a18251-kube-api-access-c4v4p\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.856999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-default-certificate\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 
16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c9f504-b92a-4bc2-95a4-c62610a18251-metrics-tls\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857085 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk768\" (UniqueName: \"kubernetes.io/projected/b34ab160-ed91-4173-9f6d-af8e4373087a-kube-api-access-xk768\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857122 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ad570-6354-44ba-802c-4860784bf053-service-ca-bundle\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34ab160-ed91-4173-9f6d-af8e4373087a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkxgq\" (UniqueName: 
\"kubernetes.io/projected/2d1ad570-6354-44ba-802c-4860784bf053-kube-api-access-tkxgq\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ab160-ed91-4173-9f6d-af8e4373087a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857222 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-stats-auth\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.857238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-metrics-certs\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.858770 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ab160-ed91-4173-9f6d-af8e4373087a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.859046 4792 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.859380 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ad570-6354-44ba-802c-4860784bf053-service-ca-bundle\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.861443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34ab160-ed91-4173-9f6d-af8e4373087a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.861634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c9f504-b92a-4bc2-95a4-c62610a18251-metrics-tls\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.862033 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-metrics-certs\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.862162 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-stats-auth\") pod 
\"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.862868 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.869054 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.869186 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ad570-6354-44ba-802c-4860784bf053-default-certificate\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.869486 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xkgg2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.871792 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.872990 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.873965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.875491 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bxb9l"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.876417 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.876739 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.877992 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dzdmn"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.879559 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.880061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.881609 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.883008 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pz5zs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.883058 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.883971 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.884992 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.886044 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nwdkb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 
16:44:18.888919 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.890412 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ckppm"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.891388 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.892271 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.900246 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bw2ct"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.906699 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5q2cs"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.906733 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.907116 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.909287 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.914200 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.915502 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.917947 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.919967 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.921155 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.922546 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.922714 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9qk59"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.923743 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.924817 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.925875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.927202 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.928188 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.929480 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.930442 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.931779 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5dbc"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.932463 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bw2ct"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.933466 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bxb9l"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.934541 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.935554 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565644-gg5p9"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 
16:44:18.936726 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dzdmn"] Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.942784 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.963188 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 16:44:18 crc kubenswrapper[4792]: I0319 16:44:18.982303 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.002960 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.023040 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.055143 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.062492 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.082698 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.103170 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.123482 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.144614 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.164478 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.184953 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.204677 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.222676 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.242357 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.263497 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.283386 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.303132 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: 
I0319 16:44:19.323907 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.344264 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.362973 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.382974 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.422669 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpnqx\" (UniqueName: \"kubernetes.io/projected/79259d19-3c66-4aa6-baa6-666ee50833b2-kube-api-access-jpnqx\") pod \"machine-api-operator-5694c8668f-28msx\" (UID: \"79259d19-3c66-4aa6-baa6-666ee50833b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.438123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8gs4\" (UniqueName: \"kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4\") pod \"controller-manager-879f6c89f-zwwzh\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.458075 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8zv\" (UniqueName: \"kubernetes.io/projected/532c43ef-0391-4ff7-b26c-aeef9da10c5e-kube-api-access-fj8zv\") pod \"openshift-apiserver-operator-796bbdcf4f-7kvbs\" (UID: \"532c43ef-0391-4ff7-b26c-aeef9da10c5e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.477459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp7m9\" (UniqueName: \"kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9\") pod \"console-f9d7485db-q29n4\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.484130 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.503472 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.523319 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.536604 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.560349 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.567520 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8zj\" (UniqueName: \"kubernetes.io/projected/2a14e97e-dd33-47d5-8c93-2cd1747a0ba7-kube-api-access-5x8zj\") pod \"apiserver-76f77b778f-ms27t\" (UID: \"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7\") " pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.600646 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.611676 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.624732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.626734 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.643740 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.648745 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.665257 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.665991 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 
16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.666776 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:46:21.666739854 +0000 UTC m=+344.812797394 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666851 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c43d7a6a-8816-4471-92f5-32dc458c677f-serving-cert\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95mnv\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-kube-api-access-95mnv\") pod 
\"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-client\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.666984 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-serving-cert\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzdx\" (UniqueName: \"kubernetes.io/projected/c43d7a6a-8816-4471-92f5-32dc458c677f-kube-api-access-tdzdx\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667041 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-config\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-config\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667118 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-service-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667141 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-trusted-ca\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 
19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667159 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8eb2662-5241-48e2-9a13-20e0635514ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667183 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tbl\" (UniqueName: \"kubernetes.io/projected/36f74c49-94ef-404a-aeab-c3ef752df373-kube-api-access-q7tbl\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8eb2662-5241-48e2-9a13-20e0635514ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mht5\" (UniqueName: \"kubernetes.io/projected/356468d1-7817-4566-bb80-ca21f4b9ff24-kube-api-access-9mht5\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667311 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667328 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/356468d1-7817-4566-bb80-ca21f4b9ff24-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667390 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/356468d1-7817-4566-bb80-ca21f4b9ff24-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.667413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdx4\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.669748 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.169716615 +0000 UTC m=+223.315774315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.672863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.672879 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.675769 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.684760 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.704597 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.724176 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.745921 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.763261 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768429 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-registration-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768600 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmhx\" (UniqueName: \"kubernetes.io/projected/e7c8fc86-569f-425e-bb93-e75a206f1e68-kube-api-access-9zmhx\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7gh\" (UniqueName: \"kubernetes.io/projected/6adf0a51-8344-4d3e-906b-423278cf06b7-kube-api-access-fc7gh\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768645 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768685 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0850c733-a734-4c4b-9952-42b30f77822f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/356468d1-7817-4566-bb80-ca21f4b9ff24-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-node-bootstrap-token\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0850c733-a734-4c4b-9952-42b30f77822f-config\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768765 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c43d7a6a-8816-4471-92f5-32dc458c677f-serving-cert\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768931 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768949 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-config\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768966 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.768986 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a446d1fe-6ebb-425a-8b70-b3225da28873-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769031 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769050 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769099 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzdx\" (UniqueName: \"kubernetes.io/projected/c43d7a6a-8816-4471-92f5-32dc458c677f-kube-api-access-tdzdx\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769122 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrzp\" (UniqueName: \"kubernetes.io/projected/9a45e861-132e-4e80-8bf5-f48c43844b99-kube-api-access-srrzp\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769140 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6470e583-2fed-4638-a5b3-3213db4f4b84-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769155 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6430b947-6329-4e68-9cb4-6e08ee058f70-serving-cert\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gph\" (UniqueName: \"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-kube-api-access-d6gph\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-config\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769222 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769237 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-webhook-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwl5m\" (UniqueName: \"kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6adf0a51-8344-4d3e-906b-423278cf06b7-metrics-tls\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769303 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-srv-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-service-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769345 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2n44\" (UniqueName: \"kubernetes.io/projected/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-kube-api-access-x2n44\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769359 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhpl\" (UniqueName: \"kubernetes.io/projected/89bffca4-d37a-4bf9-a958-f1a3c9f413e0-kube-api-access-wmhpl\") pod \"migrator-59844c95c7-sktld\" (UID: \"89bffca4-d37a-4bf9-a958-f1a3c9f413e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nqg4\" (UniqueName: \"kubernetes.io/projected/b749c00a-6a69-4782-8018-7e6f759c9575-kube-api-access-5nqg4\") pod \"downloads-7954f5f757-9qk59\" (UID: \"b749c00a-6a69-4782-8018-7e6f759c9575\") " pod="openshift-console/downloads-7954f5f757-9qk59"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769390 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-config\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-socket-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-mountpoint-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jqm\" (UniqueName: \"kubernetes.io/projected/f75bebd8-969f-4e62-81a3-4ff5e456ce28-kube-api-access-k9jqm\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-serving-cert\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769493 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-plugins-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prh96\" (UniqueName: \"kubernetes.io/projected/70747030-a75c-4fef-840e-d79555471540-kube-api-access-prh96\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92hq\" (UniqueName: \"kubernetes.io/projected/a446d1fe-6ebb-425a-8b70-b3225da28873-kube-api-access-n92hq\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769550 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eea1483-0c0b-46af-94a0-856a9a25128c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769588 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adf0a51-8344-4d3e-906b-423278cf06b7-config-volume\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tbl\" (UniqueName: \"kubernetes.io/projected/36f74c49-94ef-404a-aeab-c3ef752df373-kube-api-access-q7tbl\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769673 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-dir\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8eb2662-5241-48e2-9a13-20e0635514ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769718 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68729\" (UniqueName: \"kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mht5\" (UniqueName: \"kubernetes.io/projected/356468d1-7817-4566-bb80-ca21f4b9ff24-kube-api-access-9mht5\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a45e861-132e-4e80-8bf5-f48c43844b99-serving-cert\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769786 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7c8fc86-569f-425e-bb93-e75a206f1e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769801 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-machine-approver-tls\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d09edb3-848f-4a5d-bccf-4122850cb7bb-tmpfs\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769830 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz4p\" (UniqueName: \"kubernetes.io/projected/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-kube-api-access-7cz4p\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769859 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/356468d1-7817-4566-bb80-ca21f4b9ff24-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwb8s\" (UniqueName: \"kubernetes.io/projected/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-kube-api-access-fwb8s\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769907 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5fea090-6dce-44d1-b5bc-9dabbfa00286-proxy-tls\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eea1483-0c0b-46af-94a0-856a9a25128c-config\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmdx4\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769953 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75bebd8-969f-4e62-81a3-4ff5e456ce28-cert\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769968 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-profile-collector-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.769999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770034 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8j2\" (UniqueName: \"kubernetes.io/projected/2d09edb3-848f-4a5d-bccf-4122850cb7bb-kube-api-access-dh8j2\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770086 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np25d\" (UniqueName: \"kubernetes.io/projected/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-kube-api-access-np25d\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6470e583-2fed-4638-a5b3-3213db4f4b84-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770116 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770148 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ngw8\" (UniqueName: \"kubernetes.io/projected/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-kube-api-access-8ngw8\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95mnv\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-kube-api-access-95mnv\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cxr\" (UniqueName: \"kubernetes.io/projected/6430b947-6329-4e68-9cb4-6e08ee058f70-kube-api-access-h5cxr\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770198 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-client\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-apiservice-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770229 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-client\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzd5w\" (UniqueName: \"kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w\") pod \"auto-csr-approver-29565644-gg5p9\" (UID: \"422112a2-a6c2-4d09-aaeb-e4f9924ed96e\") " pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0850c733-a734-4c4b-9952-42b30f77822f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770278 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770326 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-csi-data-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770341 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770361 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-serving-cert\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770423 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5fea090-6dce-44d1-b5bc-9dabbfa00286-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-config\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " 
pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770470 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5mrv\" (UniqueName: \"kubernetes.io/projected/8caec22c-6b2b-4b86-904d-a7954633e59d-kube-api-access-j5mrv\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ncj\" (UniqueName: \"kubernetes.io/projected/6470e583-2fed-4638-a5b3-3213db4f4b84-kube-api-access-59ncj\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770593 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llr7\" (UniqueName: \"kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-cabundle\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770642 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a45e861-132e-4e80-8bf5-f48c43844b99-config\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtf7\" (UniqueName: \"kubernetes.io/projected/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-kube-api-access-2gtf7\") pod 
\"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770673 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwp7\" (UniqueName: \"kubernetes.io/projected/e5fea090-6dce-44d1-b5bc-9dabbfa00286-kube-api-access-6hwp7\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eea1483-0c0b-46af-94a0-856a9a25128c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770720 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-service-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc 
kubenswrapper[4792]: I0319 16:44:19.770736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8caec22c-6b2b-4b86-904d-a7954633e59d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jh8\" (UniqueName: \"kubernetes.io/projected/990ccb69-2ef3-40de-a969-a985d3a60a04-kube-api-access-c7jh8\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-srv-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-images\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770799 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-trusted-ca\") pod 
\"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8eb2662-5241-48e2-9a13-20e0635514ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-encryption-config\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4sq\" (UniqueName: \"kubernetes.io/projected/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-kube-api-access-2j4sq\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770877 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-auth-proxy-config\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770893 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-key\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-policies\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770982 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shk9j\" (UniqueName: \"kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.770996 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-certs\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.771011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-proxy-tls\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.771337 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.271308407 +0000 UTC m=+223.417365947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.772271 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.773143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/356468d1-7817-4566-bb80-ca21f4b9ff24-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.774453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.775290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/356468d1-7817-4566-bb80-ca21f4b9ff24-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 
16:44:19.775756 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.778141 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.779267 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8eb2662-5241-48e2-9a13-20e0635514ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.779319 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.780020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-service-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: 
I0319 16:44:19.780429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.780776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-trusted-ca\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.782755 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-client\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.784114 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8eb2662-5241-48e2-9a13-20e0635514ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.784507 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-config\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.785017 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/36f74c49-94ef-404a-aeab-c3ef752df373-etcd-ca\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.785288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c43d7a6a-8816-4471-92f5-32dc458c677f-serving-cert\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.785734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c43d7a6a-8816-4471-92f5-32dc458c677f-config\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.788406 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.794729 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36f74c49-94ef-404a-aeab-c3ef752df373-serving-cert\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.801443 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.802658 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.804759 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.823944 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.845078 4792 request.go:700] Waited for 1.014791891s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.846483 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.862697 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.865238 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7c8fc86-569f-425e-bb93-e75a206f1e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874599 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-machine-approver-tls\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d09edb3-848f-4a5d-bccf-4122850cb7bb-tmpfs\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a45e861-132e-4e80-8bf5-f48c43844b99-serving-cert\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874688 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz4p\" (UniqueName: \"kubernetes.io/projected/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-kube-api-access-7cz4p\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874711 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5fea090-6dce-44d1-b5bc-9dabbfa00286-proxy-tls\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874754 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eea1483-0c0b-46af-94a0-856a9a25128c-config\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874775 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwb8s\" (UniqueName: \"kubernetes.io/projected/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-kube-api-access-fwb8s\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-profile-collector-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75bebd8-969f-4e62-81a3-4ff5e456ce28-cert\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874893 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8j2\" (UniqueName: \"kubernetes.io/projected/2d09edb3-848f-4a5d-bccf-4122850cb7bb-kube-api-access-dh8j2\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc 
kubenswrapper[4792]: I0319 16:44:19.874944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np25d\" (UniqueName: \"kubernetes.io/projected/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-kube-api-access-np25d\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874967 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6470e583-2fed-4638-a5b3-3213db4f4b84-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.874989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875011 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ngw8\" (UniqueName: \"kubernetes.io/projected/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-kube-api-access-8ngw8\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875041 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cxr\" (UniqueName: 
\"kubernetes.io/projected/6430b947-6329-4e68-9cb4-6e08ee058f70-kube-api-access-h5cxr\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875064 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-client\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875087 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-apiservice-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875114 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzd5w\" (UniqueName: \"kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w\") pod \"auto-csr-approver-29565644-gg5p9\" (UID: \"422112a2-a6c2-4d09-aaeb-e4f9924ed96e\") " pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875158 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0850c733-a734-4c4b-9952-42b30f77822f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-csi-data-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875251 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875298 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5fea090-6dce-44d1-b5bc-9dabbfa00286-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mrv\" (UniqueName: \"kubernetes.io/projected/8caec22c-6b2b-4b86-904d-a7954633e59d-kube-api-access-j5mrv\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875371 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ncj\" 
(UniqueName: \"kubernetes.io/projected/6470e583-2fed-4638-a5b3-3213db4f4b84-kube-api-access-59ncj\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875399 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llr7\" (UniqueName: \"kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-cabundle\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a45e861-132e-4e80-8bf5-f48c43844b99-config\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gtf7\" (UniqueName: \"kubernetes.io/projected/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-kube-api-access-2gtf7\") pod \"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hwp7\" (UniqueName: \"kubernetes.io/projected/e5fea090-6dce-44d1-b5bc-9dabbfa00286-kube-api-access-6hwp7\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875566 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eea1483-0c0b-46af-94a0-856a9a25128c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8caec22c-6b2b-4b86-904d-a7954633e59d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875612 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jh8\" (UniqueName: \"kubernetes.io/projected/990ccb69-2ef3-40de-a969-a985d3a60a04-kube-api-access-c7jh8\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875640 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-images\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875663 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-srv-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875684 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-encryption-config\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875709 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4sq\" (UniqueName: \"kubernetes.io/projected/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-kube-api-access-2j4sq\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-auth-proxy-config\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-key\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-policies\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875908 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shk9j\" (UniqueName: \"kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-proxy-tls\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875954 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-certs\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.875979 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-registration-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmhx\" (UniqueName: \"kubernetes.io/projected/e7c8fc86-569f-425e-bb93-e75a206f1e68-kube-api-access-9zmhx\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc7gh\" (UniqueName: \"kubernetes.io/projected/6adf0a51-8344-4d3e-906b-423278cf06b7-kube-api-access-fc7gh\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876108 4792 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876611 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0850c733-a734-4c4b-9952-42b30f77822f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876619 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs"] Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-node-bootstrap-token\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 
16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0850c733-a734-4c4b-9952-42b30f77822f-config\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876720 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876802 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-config\") pod 
\"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876824 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876865 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876888 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a446d1fe-6ebb-425a-8b70-b3225da28873-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876916 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc 
kubenswrapper[4792]: I0319 16:44:19.876949 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.876982 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrzp\" (UniqueName: \"kubernetes.io/projected/9a45e861-132e-4e80-8bf5-f48c43844b99-kube-api-access-srrzp\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877006 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6470e583-2fed-4638-a5b3-3213db4f4b84-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6430b947-6329-4e68-9cb4-6e08ee058f70-serving-cert\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877052 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gph\" (UniqueName: 
\"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-kube-api-access-d6gph\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-webhook-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877116 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877140 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwl5m\" (UniqueName: \"kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc 
kubenswrapper[4792]: I0319 16:44:19.877163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6adf0a51-8344-4d3e-906b-423278cf06b7-metrics-tls\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877186 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877208 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-srv-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-service-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877254 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2n44\" (UniqueName: \"kubernetes.io/projected/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-kube-api-access-x2n44\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: 
\"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhpl\" (UniqueName: \"kubernetes.io/projected/89bffca4-d37a-4bf9-a958-f1a3c9f413e0-kube-api-access-wmhpl\") pod \"migrator-59844c95c7-sktld\" (UID: \"89bffca4-d37a-4bf9-a958-f1a3c9f413e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nqg4\" (UniqueName: \"kubernetes.io/projected/b749c00a-6a69-4782-8018-7e6f759c9575-kube-api-access-5nqg4\") pod \"downloads-7954f5f757-9qk59\" (UID: \"b749c00a-6a69-4782-8018-7e6f759c9575\") " pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-socket-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-mountpoint-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-config\") pod 
\"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jqm\" (UniqueName: \"kubernetes.io/projected/f75bebd8-969f-4e62-81a3-4ff5e456ce28-kube-api-access-k9jqm\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-serving-cert\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877435 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-plugins-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prh96\" (UniqueName: 
\"kubernetes.io/projected/70747030-a75c-4fef-840e-d79555471540-kube-api-access-prh96\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92hq\" (UniqueName: \"kubernetes.io/projected/a446d1fe-6ebb-425a-8b70-b3225da28873-kube-api-access-n92hq\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adf0a51-8344-4d3e-906b-423278cf06b7-config-volume\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877612 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eea1483-0c0b-46af-94a0-856a9a25128c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 
16:44:19.877628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-dir\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68729\" (UniqueName: \"kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.877744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: 
\"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.879413 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7c8fc86-569f-425e-bb93-e75a206f1e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.879466 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eea1483-0c0b-46af-94a0-856a9a25128c-config\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.880354 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.880785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.880796 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.881685 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d09edb3-848f-4a5d-bccf-4122850cb7bb-tmpfs\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.882016 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-csi-data-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.882598 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-client\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884176 4792 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884282 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-socket-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884497 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-mountpoint-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.884877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-machine-approver-tls\") pod \"machine-approver-56656f9798-jb9zs\" (UID: 
\"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.885079 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-config\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.885756 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.885919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-registration-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.886246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.886565 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.386548603 +0000 UTC m=+223.532606333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.888340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.888546 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-plugins-dir\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.889901 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-auth-proxy-config\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.890238 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.891327 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.894678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.895770 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.899999 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-policies\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc 
kubenswrapper[4792]: I0319 16:44:19.900272 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6430b947-6329-4e68-9cb4-6e08ee058f70-service-ca-bundle\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.900490 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.900903 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-audit-dir\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.901737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0850c733-a734-4c4b-9952-42b30f77822f-config\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.901967 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.902539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.902694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eea1483-0c0b-46af-94a0-856a9a25128c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.902824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.902921 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-serving-cert\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.904176 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.907529 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.908146 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.909575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.911421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.912425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.913347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-encryption-config\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.913474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.913556 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8caec22c-6b2b-4b86-904d-a7954633e59d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.913712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0850c733-a734-4c4b-9952-42b30f77822f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.913928 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.917170 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6430b947-6329-4e68-9cb4-6e08ee058f70-serving-cert\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.923083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.923206 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-config\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.923345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" 
Mar 19 16:44:19 crc kubenswrapper[4792]: W0319 16:44:19.925014 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532c43ef_0391_4ff7_b26c_aeef9da10c5e.slice/crio-ca864485b4d225e72017e7955d8b6688a28b7b1ca4bee21079e101e22cfe9ceb WatchSource:0}: Error finding container ca864485b4d225e72017e7955d8b6688a28b7b1ca4bee21079e101e22cfe9ceb: Status 404 returned error can't find the container with id ca864485b4d225e72017e7955d8b6688a28b7b1ca4bee21079e101e22cfe9ceb Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.925282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.926338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5fea090-6dce-44d1-b5bc-9dabbfa00286-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.926374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.940760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ms27t"] Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.941786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e5fea090-6dce-44d1-b5bc-9dabbfa00286-proxy-tls\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.944135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.968478 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.978461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.979322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-apiservice-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.979434 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.479414836 +0000 UTC m=+223.625472366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.979580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:19 crc kubenswrapper[4792]: E0319 16:44:19.979988 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.479980831 +0000 UTC m=+223.626038371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.982613 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d09edb3-848f-4a5d-bccf-4122850cb7bb-webhook-cert\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:19 crc kubenswrapper[4792]: I0319 16:44:19.983961 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.003452 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.023120 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.044625 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.056191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a45e861-132e-4e80-8bf5-f48c43844b99-serving-cert\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.064785 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.080400 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.080570 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.580544425 +0000 UTC m=+223.726601965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.081192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.081527 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.581519762 +0000 UTC m=+223.727577302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.083828 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.085565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a45e861-132e-4e80-8bf5-f48c43844b99-config\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.104339 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a14e97e_dd33_47d5_8c93_2cd1747a0ba7.slice/crio-a69ecff3dc812d1947c3c5201dc290be0b5dad92586d749bf1b234aa88a86c4e WatchSource:0}: Error finding container a69ecff3dc812d1947c3c5201dc290be0b5dad92586d749bf1b234aa88a86c4e: Status 404 returned error can't find the container with id a69ecff3dc812d1947c3c5201dc290be0b5dad92586d749bf1b234aa88a86c4e Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.104387 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.105434 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-3573c224d1f7e8c798584e0382b4bc18fdb845592a42f938969d171199b55a3d WatchSource:0}: Error finding container 3573c224d1f7e8c798584e0382b4bc18fdb845592a42f938969d171199b55a3d: Status 404 returned error can't find the container with id 3573c224d1f7e8c798584e0382b4bc18fdb845592a42f938969d171199b55a3d Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.106438 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-730728a917f04d1b1be0f80f02643fe7d2dfa36621362b05c1dfe38a7f38278a WatchSource:0}: Error finding container 730728a917f04d1b1be0f80f02643fe7d2dfa36621362b05c1dfe38a7f38278a: Status 404 returned error can't find the container with id 730728a917f04d1b1be0f80f02643fe7d2dfa36621362b05c1dfe38a7f38278a Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.124201 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.141569 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"] Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.142761 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.149996 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-28msx"] Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.151143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a446d1fe-6ebb-425a-8b70-b3225da28873-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.163396 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.168510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.172950 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-profile-collector-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.176005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.182173 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.182400 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.682364044 +0000 UTC m=+223.828421604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.182677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.183329 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.68331776 +0000 UTC m=+223.829375310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.184332 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.189971 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-srv-cert\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.204373 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.224754 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.226210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-images\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.228860 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-proxy-tls\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.231122 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.231170 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.243218 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.263219 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.274326 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-key\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.283735 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 16:44:20 crc 
kubenswrapper[4792]: I0319 16:44:20.283923 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.284364 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.784330796 +0000 UTC m=+223.930388356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.284865 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.285369 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:20.785356703 +0000 UTC m=+223.931414243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.290220 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-208b0b1ec6274e01ca2f26ac01a6899823104c54749b03e63c77822d0bd0442e WatchSource:0}: Error finding container 208b0b1ec6274e01ca2f26ac01a6899823104c54749b03e63c77822d0bd0442e: Status 404 returned error can't find the container with id 208b0b1ec6274e01ca2f26ac01a6899823104c54749b03e63c77822d0bd0442e Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.299181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-srv-cert\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.303124 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.324389 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 
16:44:20.343811 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.354317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6470e583-2fed-4638-a5b3-3213db4f4b84-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.363612 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.384114 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.385886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.386013 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.885994349 +0000 UTC m=+224.032051889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.386391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.386986 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.886978347 +0000 UTC m=+224.033035887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.403964 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.406327 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/990ccb69-2ef3-40de-a969-a985d3a60a04-signing-cabundle\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.423245 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.443819 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.447889 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6470e583-2fed-4638-a5b3-3213db4f4b84-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.463797 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.484611 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.488992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.489200 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.989172535 +0000 UTC m=+224.135230065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.490735 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.491322 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:20.991303504 +0000 UTC m=+224.137361044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.505134 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.513070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.523173 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.543397 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.581783 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk768\" (UniqueName: \"kubernetes.io/projected/b34ab160-ed91-4173-9f6d-af8e4373087a-kube-api-access-xk768\") pod \"openshift-controller-manager-operator-756b6f6bc6-md9c2\" (UID: \"b34ab160-ed91-4173-9f6d-af8e4373087a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.592209 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.592461 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.092427013 +0000 UTC m=+224.238484633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.592554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.593033 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.093015059 +0000 UTC m=+224.239072619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.597768 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4v4p\" (UniqueName: \"kubernetes.io/projected/c1c9f504-b92a-4bc2-95a4-c62610a18251-kube-api-access-c4v4p\") pod \"dns-operator-744455d44c-5q2cs\" (UID: \"c1c9f504-b92a-4bc2-95a4-c62610a18251\") " pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.616674 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkxgq\" (UniqueName: \"kubernetes.io/projected/2d1ad570-6354-44ba-802c-4860784bf053-kube-api-access-tkxgq\") pod \"router-default-5444994796-6k44w\" (UID: \"2d1ad570-6354-44ba-802c-4860784bf053\") " pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.622725 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.633205 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75bebd8-969f-4e62-81a3-4ff5e456ce28-cert\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.637194 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.643675 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.663310 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.675580 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.676917 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.683353 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.693820 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.694323 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.194280133 +0000 UTC m=+224.340337673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.698779 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.704971 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.723874 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.730921 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1ad570_6354_44ba_802c_4860784bf053.slice/crio-5afaaa4b62a7048adcd3ff71e8c53cb5ce95a4773415da266b30ead37cdad47a WatchSource:0}: Error finding container 5afaaa4b62a7048adcd3ff71e8c53cb5ce95a4773415da266b30ead37cdad47a: Status 404 returned error can't find the container with id 5afaaa4b62a7048adcd3ff71e8c53cb5ce95a4773415da266b30ead37cdad47a Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.734881 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adf0a51-8344-4d3e-906b-423278cf06b7-config-volume\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.744879 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.751398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6adf0a51-8344-4d3e-906b-423278cf06b7-metrics-tls\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.763861 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.783913 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.796136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.796959 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.296943194 +0000 UTC m=+224.443000844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.797489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-node-bootstrap-token\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.803127 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.817124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/70747030-a75c-4fef-840e-d79555471540-certs\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.826219 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2"] Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.826288 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.845806 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.863327 4792 request.go:700] Waited for 1.955809281s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.868183 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.897220 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.897433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q29n4" event={"ID":"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809","Type":"ContainerStarted","Data":"ca95f3548e51b4e3b3d0fba0b9feb54aa9a767208d65bd1bab93684ae25543d0"} Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.897459 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.397426896 +0000 UTC m=+224.543484436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.897484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q29n4" event={"ID":"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809","Type":"ContainerStarted","Data":"8169800fd78f53913ae640b84357f02c39481da1393cbca3a0f6ad1fe7ff20c1"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.897752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: E0319 16:44:20.898174 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.398156445 +0000 UTC m=+224.544214195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.898925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"815fc599b44dd4582d4bd34171d759437fd55c2f3d0b4bfce440551fa34acf83"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.898951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3573c224d1f7e8c798584e0382b4bc18fdb845592a42f938969d171199b55a3d"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.899252 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.907236 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" event={"ID":"5696f5a2-e040-4aa0-818d-a390c8128171","Type":"ContainerStarted","Data":"c4d7d3a8d51507f13eed5d12651b512678f65bb4a0314f0cbee9d2497c3cfe2f"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.907294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" event={"ID":"5696f5a2-e040-4aa0-818d-a390c8128171","Type":"ContainerStarted","Data":"40036ed060b061a340a77d3f0b8eedfa4b8d66389b3ed2a28db3c1fa69f08426"} 
Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.908119 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.908557 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5q2cs"] Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.911248 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" event={"ID":"79259d19-3c66-4aa6-baa6-666ee50833b2","Type":"ContainerStarted","Data":"35f160032a8f8f1da0c965ff5656ccabcc93459a6569db1c696d15646d0f8d83"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.911297 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" event={"ID":"79259d19-3c66-4aa6-baa6-666ee50833b2","Type":"ContainerStarted","Data":"ce32f62324d9c3e7ccbc235477b74f839d0d5f0c2ebf7dccec9b820f3c8d069b"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.911312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" event={"ID":"79259d19-3c66-4aa6-baa6-666ee50833b2","Type":"ContainerStarted","Data":"4abe8c31e060b239f7af0beab57bff9f94a05e4c76c47e726ee7cf8ffcc80d35"} Mar 19 16:44:20 crc kubenswrapper[4792]: W0319 16:44:20.912777 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1c9f504_b92a_4bc2_95a4_c62610a18251.slice/crio-c3092f14d46d3cd6fcf84b72a175221e064baad19e06f172edd55971ef4bc4fd WatchSource:0}: Error finding container c3092f14d46d3cd6fcf84b72a175221e064baad19e06f172edd55971ef4bc4fd: Status 404 returned error can't find the container with id c3092f14d46d3cd6fcf84b72a175221e064baad19e06f172edd55971ef4bc4fd Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.914377 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" event={"ID":"b34ab160-ed91-4173-9f6d-af8e4373087a","Type":"ContainerStarted","Data":"60c176ef4b966a62f3d46afe9a0e989a55c770c518e1b9256f851eae6b29400d"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.916695 4792 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zwwzh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.916759 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.918640 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.918811 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4012102ae2a1fd832335c4c058e03e07e1ee9f4369061c9e35890e21832f30f6"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.918860 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"730728a917f04d1b1be0f80f02643fe7d2dfa36621362b05c1dfe38a7f38278a"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.925200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" event={"ID":"532c43ef-0391-4ff7-b26c-aeef9da10c5e","Type":"ContainerStarted","Data":"55f12a7716e082a4a818e94cbaa7fffc58a9213ca87cd131f3c72de136f0e9d5"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.925253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" event={"ID":"532c43ef-0391-4ff7-b26c-aeef9da10c5e","Type":"ContainerStarted","Data":"ca864485b4d225e72017e7955d8b6688a28b7b1ca4bee21079e101e22cfe9ceb"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.926517 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6k44w" event={"ID":"2d1ad570-6354-44ba-802c-4860784bf053","Type":"ContainerStarted","Data":"5afaaa4b62a7048adcd3ff71e8c53cb5ce95a4773415da266b30ead37cdad47a"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.930136 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"84396508dc3efca43dd81d8f453794e3e06809d40cc80e1764e23a33ef920452"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.930184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"208b0b1ec6274e01ca2f26ac01a6899823104c54749b03e63c77822d0bd0442e"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.933045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-ms27t" event={"ID":"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7","Type":"ContainerDied","Data":"c158348fa52cc91c88b65fee527b7d58f9f73b00baac4e7cdf94788bb6166255"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.932625 4792 generic.go:334] "Generic (PLEG): container finished" podID="2a14e97e-dd33-47d5-8c93-2cd1747a0ba7" containerID="c158348fa52cc91c88b65fee527b7d58f9f73b00baac4e7cdf94788bb6166255" exitCode=0 Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.934037 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" event={"ID":"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7","Type":"ContainerStarted","Data":"a69ecff3dc812d1947c3c5201dc290be0b5dad92586d749bf1b234aa88a86c4e"} Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.942980 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmdx4\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.964983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.982324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tbl\" (UniqueName: \"kubernetes.io/projected/36f74c49-94ef-404a-aeab-c3ef752df373-kube-api-access-q7tbl\") pod \"etcd-operator-b45778765-nwdkb\" (UID: \"36f74c49-94ef-404a-aeab-c3ef752df373\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.999355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mht5\" (UniqueName: \"kubernetes.io/projected/356468d1-7817-4566-bb80-ca21f4b9ff24-kube-api-access-9mht5\") pod \"openshift-config-operator-7777fb866f-gvfqb\" (UID: \"356468d1-7817-4566-bb80-ca21f4b9ff24\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:20 crc kubenswrapper[4792]: I0319 16:44:20.999465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.000692 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.500647502 +0000 UTC m=+224.646705052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.019645 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95mnv\" (UniqueName: \"kubernetes.io/projected/b8eb2662-5241-48e2-9a13-20e0635514ae-kube-api-access-95mnv\") pod \"ingress-operator-5b745b69d9-h9f57\" (UID: \"b8eb2662-5241-48e2-9a13-20e0635514ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.040275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzdx\" (UniqueName: \"kubernetes.io/projected/c43d7a6a-8816-4471-92f5-32dc458c677f-kube-api-access-tdzdx\") pod \"console-operator-58897d9998-xkgg2\" (UID: \"c43d7a6a-8816-4471-92f5-32dc458c677f\") " pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.058970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np25d\" (UniqueName: \"kubernetes.io/projected/e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4-kube-api-access-np25d\") pod \"olm-operator-6b444d44fb-zsdng\" (UID: \"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.089382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzd5w\" (UniqueName: \"kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w\") pod 
\"auto-csr-approver-29565644-gg5p9\" (UID: \"422112a2-a6c2-4d09-aaeb-e4f9924ed96e\") " pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.100100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cxr\" (UniqueName: \"kubernetes.io/projected/6430b947-6329-4e68-9cb4-6e08ee058f70-kube-api-access-h5cxr\") pod \"authentication-operator-69f744f599-cfgxg\" (UID: \"6430b947-6329-4e68-9cb4-6e08ee058f70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.101772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.103218 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.603206991 +0000 UTC m=+224.749264531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.108886 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.125575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwb8s\" (UniqueName: \"kubernetes.io/projected/1fdc3fb5-f78e-4a1c-8c25-771bee54fd09-kube-api-access-fwb8s\") pod \"machine-approver-56656f9798-jb9zs\" (UID: \"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.140940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8j2\" (UniqueName: \"kubernetes.io/projected/2d09edb3-848f-4a5d-bccf-4122850cb7bb-kube-api-access-dh8j2\") pod \"packageserver-d55dfcdfc-55nsz\" (UID: \"2d09edb3-848f-4a5d-bccf-4122850cb7bb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.162035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mrv\" (UniqueName: \"kubernetes.io/projected/8caec22c-6b2b-4b86-904d-a7954633e59d-kube-api-access-j5mrv\") pod \"multus-admission-controller-857f4d67dd-pz5zs\" (UID: \"8caec22c-6b2b-4b86-904d-a7954633e59d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.163784 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.180179 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.190049 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cz4p\" (UniqueName: \"kubernetes.io/projected/a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7-kube-api-access-7cz4p\") pod \"catalog-operator-68c6474976-25htk\" (UID: \"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.202871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.203859 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.703822906 +0000 UTC m=+224.849880446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.204252 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ncj\" (UniqueName: \"kubernetes.io/projected/6470e583-2fed-4638-a5b3-3213db4f4b84-kube-api-access-59ncj\") pod \"kube-storage-version-migrator-operator-b67b599dd-t6b87\" (UID: \"6470e583-2fed-4638-a5b3-3213db4f4b84\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.211771 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.217855 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.223540 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llr7\" (UniqueName: \"kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7\") pod \"marketplace-operator-79b997595-5jwjp\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.244019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ngw8\" (UniqueName: \"kubernetes.io/projected/05d420b5-e7fb-4a41-b088-c7a8cbf91b5f-kube-api-access-8ngw8\") pod \"machine-config-operator-74547568cd-lrb5n\" (UID: \"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.248325 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.260189 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0850c733-a734-4c4b-9952-42b30f77822f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l4f68\" (UID: \"0850c733-a734-4c4b-9952-42b30f77822f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.286802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhpl\" (UniqueName: \"kubernetes.io/projected/89bffca4-d37a-4bf9-a958-f1a3c9f413e0-kube-api-access-wmhpl\") pod \"migrator-59844c95c7-sktld\" (UID: \"89bffca4-d37a-4bf9-a958-f1a3c9f413e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.290356 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.300302 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.306161 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.306718 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.806703124 +0000 UTC m=+224.952760664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.309468 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.340155 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prh96\" (UniqueName: \"kubernetes.io/projected/70747030-a75c-4fef-840e-d79555471540-kube-api-access-prh96\") pod \"machine-config-server-ckppm\" (UID: \"70747030-a75c-4fef-840e-d79555471540\") " pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.343065 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cfgxg"] Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.352356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nqg4\" (UniqueName: \"kubernetes.io/projected/b749c00a-6a69-4782-8018-7e6f759c9575-kube-api-access-5nqg4\") pod \"downloads-7954f5f757-9qk59\" (UID: \"b749c00a-6a69-4782-8018-7e6f759c9575\") " pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.363364 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.368754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gtf7\" (UniqueName: \"kubernetes.io/projected/79e6ba0f-7a19-4676-af04-8cbcc56ab4fa-kube-api-access-2gtf7\") pod \"cluster-samples-operator-665b6dd947-2ccxc\" (UID: \"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.377303 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.377828 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.381533 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.395907 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.397620 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shk9j\" (UniqueName: \"kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j\") pod \"route-controller-manager-6576b87f9c-lbql2\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.410725 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hwp7\" (UniqueName: \"kubernetes.io/projected/e5fea090-6dce-44d1-b5bc-9dabbfa00286-kube-api-access-6hwp7\") pod \"machine-config-controller-84d6567774-dk4pz\" (UID: \"e5fea090-6dce-44d1-b5bc-9dabbfa00286\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.415660 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.418399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.418737 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.918711221 +0000 UTC m=+225.064768771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.418962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.419506 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:21.919474551 +0000 UTC m=+225.065532091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.420599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eea1483-0c0b-46af-94a0-856a9a25128c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-t84hr\" (UID: \"5eea1483-0c0b-46af-94a0-856a9a25128c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.427428 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.452975 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.454097 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jqm\" (UniqueName: \"kubernetes.io/projected/f75bebd8-969f-4e62-81a3-4ff5e456ce28-kube-api-access-k9jqm\") pod \"ingress-canary-bxb9l\" (UID: \"f75bebd8-969f-4e62-81a3-4ff5e456ce28\") " pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.462947 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.466670 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmhx\" (UniqueName: \"kubernetes.io/projected/e7c8fc86-569f-425e-bb93-e75a206f1e68-kube-api-access-9zmhx\") pod \"control-plane-machine-set-operator-78cbb6b69f-r6754\" (UID: \"e7c8fc86-569f-425e-bb93-e75a206f1e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.466927 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.491211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc7gh\" (UniqueName: \"kubernetes.io/projected/6adf0a51-8344-4d3e-906b-423278cf06b7-kube-api-access-fc7gh\") pod \"dns-default-dzdmn\" (UID: \"6adf0a51-8344-4d3e-906b-423278cf06b7\") " pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.504064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwl5m\" (UniqueName: \"kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m\") pod \"oauth-openshift-558db77b4-2t8f8\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.519700 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.520137 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.020123268 +0000 UTC m=+225.166180808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.520217 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bxb9l" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.523529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jh8\" (UniqueName: \"kubernetes.io/projected/990ccb69-2ef3-40de-a969-a985d3a60a04-kube-api-access-c7jh8\") pod \"service-ca-9c57cc56f-v5dbc\" (UID: \"990ccb69-2ef3-40de-a969-a985d3a60a04\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.528061 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.534217 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ckppm" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.543348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92hq\" (UniqueName: \"kubernetes.io/projected/a446d1fe-6ebb-425a-8b70-b3225da28873-kube-api-access-n92hq\") pod \"package-server-manager-789f6589d5-9q2vd\" (UID: \"a446d1fe-6ebb-425a-8b70-b3225da28873\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.566600 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2n44\" (UniqueName: \"kubernetes.io/projected/3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308-kube-api-access-x2n44\") pod \"apiserver-7bbb656c7d-9clzb\" (UID: \"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.595083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee3f5314-ad5f-4391-802e-4106ab9a6c4d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9v4gc\" (UID: \"ee3f5314-ad5f-4391-802e-4106ab9a6c4d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.601425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrzp\" (UniqueName: \"kubernetes.io/projected/9a45e861-132e-4e80-8bf5-f48c43844b99-kube-api-access-srrzp\") pod \"service-ca-operator-777779d784-9vlf7\" (UID: \"9a45e861-132e-4e80-8bf5-f48c43844b99\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.620752 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.621157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.621416 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.121404922 +0000 UTC m=+225.267462462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.626756 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68729\" (UniqueName: \"kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729\") pod \"collect-profiles-29565630-8fgzl\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.626904 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.631255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.639005 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.653289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gph\" (UniqueName: \"kubernetes.io/projected/d354fc8d-a39d-4d0d-bbb5-f8d72522d42e-kube-api-access-d6gph\") pod \"cluster-image-registry-operator-dc59b4c8b-d5qx8\" (UID: \"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.653656 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.654744 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.657918 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.674894 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4sq\" (UniqueName: \"kubernetes.io/projected/03c93f52-3a7f-4fbc-921e-79ad74db2d4e-kube-api-access-2j4sq\") pod \"csi-hostpathplugin-bw2ct\" (UID: \"03c93f52-3a7f-4fbc-921e-79ad74db2d4e\") " pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.688152 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.691260 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xkgg2"] Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.723671 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.723965 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.22394997 +0000 UTC m=+225.370007500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.736183 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.742451 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.773079 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.826560 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.826927 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.326914599 +0000 UTC m=+225.472972139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.840047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.863133 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.928872 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:21 crc kubenswrapper[4792]: E0319 16:44:21.929165 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.42915161 +0000 UTC m=+225.575209150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.952054 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565644-gg5p9"] Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.971555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" event={"ID":"6430b947-6329-4e68-9cb4-6e08ee058f70","Type":"ContainerStarted","Data":"db89eb34a536fbe5d33b9df21b9b8a1044d7cbf8ab464e56c03333ced267d56b"} Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.974268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" event={"ID":"b34ab160-ed91-4173-9f6d-af8e4373087a","Type":"ContainerStarted","Data":"40eac778e2c5de4ad0fdb8b5314a2fa4fbe74e72f530535f1d6469ef533df2fe"} Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.991693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" event={"ID":"c1c9f504-b92a-4bc2-95a4-c62610a18251","Type":"ContainerStarted","Data":"a2f968025fb83c807b23b65a853004faf47597e23c9ad076ddc981dab56278db"} Mar 19 16:44:21 crc kubenswrapper[4792]: I0319 16:44:21.991737 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" 
event={"ID":"c1c9f504-b92a-4bc2-95a4-c62610a18251","Type":"ContainerStarted","Data":"c3092f14d46d3cd6fcf84b72a175221e064baad19e06f172edd55971ef4bc4fd"} Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:21.994148 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6k44w" event={"ID":"2d1ad570-6354-44ba-802c-4860784bf053","Type":"ContainerStarted","Data":"d1424878d070d2583a51787f3c1f3f6ec5d880eded73c60a6232d450ebf66415"} Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:21.999893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" event={"ID":"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7","Type":"ContainerStarted","Data":"a8d74d518216462c5308bde9211766c75ab3e5f99cb2b8ff83f15bb47c556590"} Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.008604 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" event={"ID":"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09","Type":"ContainerStarted","Data":"ab3b903c5c9c96e1b37553170090ced673014dea396d12135a292fb5fb9857c0"} Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.022362 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.030506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.030885 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.530869975 +0000 UTC m=+225.676927515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: W0319 16:44:22.030995 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc43d7a6a_8816_4471_92f5_32dc458c677f.slice/crio-b3750c64df2ca29bcc254683e085f909e7274c5378adcff3d39f2d1d065ee531 WatchSource:0}: Error finding container b3750c64df2ca29bcc254683e085f909e7274c5378adcff3d39f2d1d065ee531: Status 404 returned error can't find the container with id b3750c64df2ca29bcc254683e085f909e7274c5378adcff3d39f2d1d065ee531 Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.131216 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.131443 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:22.631399258 +0000 UTC m=+225.777456808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.132189 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.136707 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.636686493 +0000 UTC m=+225.782744033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.216603 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.233336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.233620 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.733605397 +0000 UTC m=+225.879662937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.324952 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nwdkb"] Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.325006 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng"] Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.335254 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.336014 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.835997361 +0000 UTC m=+225.982054901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: W0319 16:44:22.443225 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode05a55ab_b7e5_45ce_a692_9e7b0f9d96a4.slice/crio-0081a2e34befbfedbec82df7891cc1ba2229e2c3745b5c69a48406df2eda2617 WatchSource:0}: Error finding container 0081a2e34befbfedbec82df7891cc1ba2229e2c3745b5c69a48406df2eda2617: Status 404 returned error can't find the container with id 0081a2e34befbfedbec82df7891cc1ba2229e2c3745b5c69a48406df2eda2617 Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.444280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.444551 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:22.944539684 +0000 UTC m=+226.090597214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: W0319 16:44:22.452730 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f74c49_94ef_404a_aeab_c3ef752df373.slice/crio-860f7a1d76a5b6459ef10384c8bf2b5836b64f47d92ddcd533467d21e51402f6 WatchSource:0}: Error finding container 860f7a1d76a5b6459ef10384c8bf2b5836b64f47d92ddcd533467d21e51402f6: Status 404 returned error can't find the container with id 860f7a1d76a5b6459ef10384c8bf2b5836b64f47d92ddcd533467d21e51402f6 Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.490819 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"] Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.548768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.558183 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.058162325 +0000 UTC m=+226.204219865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.593742 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld"] Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.623672 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57"] Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.649498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.649850 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.149826405 +0000 UTC m=+226.295883945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.700799 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.752059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.752577 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.252559559 +0000 UTC m=+226.398617089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.853310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.853577 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.353561275 +0000 UTC m=+226.499618815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.941030 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6k44w" podStartSLOduration=169.941009329 podStartE2EDuration="2m49.941009329s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:22.920164118 +0000 UTC m=+226.066221658" watchObservedRunningTime="2026-03-19 16:44:22.941009329 +0000 UTC m=+226.087066869" Mar 19 16:44:22 crc kubenswrapper[4792]: I0319 16:44:22.957731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:22 crc kubenswrapper[4792]: E0319 16:44:22.958136 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.458123087 +0000 UTC m=+226.604180627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.022892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" event={"ID":"2a14e97e-dd33-47d5-8c93-2cd1747a0ba7","Type":"ContainerStarted","Data":"b644b3d76b7dc82d819a86c4835e292b289aa6e5377fe1d859ab8c4c71b16286"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.036856 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" event={"ID":"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09","Type":"ContainerStarted","Data":"f9d65019a505b4e29d22ea66740e0b203240604492c3fa1c0f69395bd4705d93"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.039984 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" event={"ID":"6430b947-6329-4e68-9cb4-6e08ee058f70","Type":"ContainerStarted","Data":"b48a0a86781cdbf0151171134b9249d236066cb74cd31d022029151e2d40553e"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.040965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" event={"ID":"36f74c49-94ef-404a-aeab-c3ef752df373","Type":"ContainerStarted","Data":"860f7a1d76a5b6459ef10384c8bf2b5836b64f47d92ddcd533467d21e51402f6"} Mar 19 16:44:23 crc kubenswrapper[4792]: W0319 16:44:23.056325 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89bffca4_d37a_4bf9_a958_f1a3c9f413e0.slice/crio-6c5b2f8736cec6383aded8d02d300bd979e5a1fd03c72cb551a3251ffcdc19a9 WatchSource:0}: Error finding container 6c5b2f8736cec6383aded8d02d300bd979e5a1fd03c72cb551a3251ffcdc19a9: Status 404 returned error can't find the container with id 6c5b2f8736cec6383aded8d02d300bd979e5a1fd03c72cb551a3251ffcdc19a9 Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.056712 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" event={"ID":"c43d7a6a-8816-4471-92f5-32dc458c677f","Type":"ContainerStarted","Data":"b3750c64df2ca29bcc254683e085f909e7274c5378adcff3d39f2d1d065ee531"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.059255 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.059565 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.559549165 +0000 UTC m=+226.705606695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.075545 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" event={"ID":"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4","Type":"ContainerStarted","Data":"0081a2e34befbfedbec82df7891cc1ba2229e2c3745b5c69a48406df2eda2617"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.117575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" event={"ID":"422112a2-a6c2-4d09-aaeb-e4f9924ed96e","Type":"ContainerStarted","Data":"58845583b672b9d5cf3ec9375de02c5f0c857637d759a3d68036b5625117f0cb"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.125418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" event={"ID":"c1c9f504-b92a-4bc2-95a4-c62610a18251","Type":"ContainerStarted","Data":"09f809950d2103da4f911d9070e76c05fce43cc363a6763847acd23f512623f5"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.137676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ckppm" event={"ID":"70747030-a75c-4fef-840e-d79555471540","Type":"ContainerStarted","Data":"7b9eabb6098353004019cd9d000e99dcd5bbebcebe20124a4881c312898ba9a3"} Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.171947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.173226 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.673207748 +0000 UTC m=+226.819265278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.210228 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-q29n4" podStartSLOduration=170.210207061 podStartE2EDuration="2m50.210207061s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.20725201 +0000 UTC m=+226.353309550" watchObservedRunningTime="2026-03-19 16:44:23.210207061 +0000 UTC m=+226.356264601" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.274814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.276137 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.776093416 +0000 UTC m=+226.922150946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.377685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.378071 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.878058237 +0000 UTC m=+227.024115767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.468684 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-md9c2" podStartSLOduration=170.468665069 podStartE2EDuration="2m50.468665069s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.467469646 +0000 UTC m=+226.613527186" watchObservedRunningTime="2026-03-19 16:44:23.468665069 +0000 UTC m=+226.614722609" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.480678 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.481072 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:23.981050118 +0000 UTC m=+227.127107658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.509168 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7kvbs" podStartSLOduration=170.509153728 podStartE2EDuration="2m50.509153728s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.507525823 +0000 UTC m=+226.653583363" watchObservedRunningTime="2026-03-19 16:44:23.509153728 +0000 UTC m=+226.655211258" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.574972 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" podStartSLOduration=170.57495533 podStartE2EDuration="2m50.57495533s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.573086859 +0000 UTC m=+226.719144389" watchObservedRunningTime="2026-03-19 16:44:23.57495533 +0000 UTC m=+226.721012870" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.582651 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: 
\"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.583035 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.083023781 +0000 UTC m=+227.229081311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.623409 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-28msx" podStartSLOduration=170.623395507 podStartE2EDuration="2m50.623395507s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.621877044 +0000 UTC m=+226.767934584" watchObservedRunningTime="2026-03-19 16:44:23.623395507 +0000 UTC m=+226.769453047" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.683330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.683602 4792 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.183588785 +0000 UTC m=+227.329646315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.744936 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.745000 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.783713 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" podStartSLOduration=170.783691316 podStartE2EDuration="2m50.783691316s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.782332089 
+0000 UTC m=+226.928389639" watchObservedRunningTime="2026-03-19 16:44:23.783691316 +0000 UTC m=+226.929748856" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.784552 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.785109 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.285083684 +0000 UTC m=+227.431141224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.837436 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5q2cs" podStartSLOduration=170.837415817 podStartE2EDuration="2m50.837415817s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.835166766 +0000 UTC m=+226.981224306" watchObservedRunningTime="2026-03-19 16:44:23.837415817 +0000 UTC m=+226.983473357" Mar 19 16:44:23 crc 
kubenswrapper[4792]: I0319 16:44:23.863389 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" podStartSLOduration=170.863374138 podStartE2EDuration="2m50.863374138s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:23.861018003 +0000 UTC m=+227.007075543" watchObservedRunningTime="2026-03-19 16:44:23.863374138 +0000 UTC m=+227.009431678" Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.886002 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.886300 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.386284815 +0000 UTC m=+227.532342355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.949936 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pz5zs"] Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.955277 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9qk59"] Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.987997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:23 crc kubenswrapper[4792]: E0319 16:44:23.988281 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.488269678 +0000 UTC m=+227.634327218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:23 crc kubenswrapper[4792]: I0319 16:44:23.994051 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.089177 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.089377 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.589349386 +0000 UTC m=+227.735406926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.089470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.089856 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.5898273 +0000 UTC m=+227.735884840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.131457 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.133278 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.141778 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.151562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.152413 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" event={"ID":"89bffca4-d37a-4bf9-a958-f1a3c9f413e0","Type":"ContainerStarted","Data":"6c5b2f8736cec6383aded8d02d300bd979e5a1fd03c72cb551a3251ffcdc19a9"} Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.154875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"] Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.155791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" 
event={"ID":"b8eb2662-5241-48e2-9a13-20e0635514ae","Type":"ContainerStarted","Data":"911dd99865e70b3222bf5623a4776dfd5081b01e9b227d24731869744d565403"} Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.157021 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerStarted","Data":"0aeb699f1742cf5b6487e28e5a9cb48897e91f97a6f60c8304ca3cd41e1e9b1c"} Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.190562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.190827 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.690788735 +0000 UTC m=+227.836846275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.190967 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.191417 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.691400381 +0000 UTC m=+227.837457921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.292530 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.293884 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.793829506 +0000 UTC m=+227.939887046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.395334 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.395743 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.895722766 +0000 UTC m=+228.041780306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.496409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.496600 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.996580788 +0000 UTC m=+228.142638328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.496682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.497028 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:24.99701034 +0000 UTC m=+228.143067880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.597903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.598091 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.098064418 +0000 UTC m=+228.244121958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.598429 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.598803 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.098795097 +0000 UTC m=+228.244852637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.601226 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.601295 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.603222 4792 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ms27t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.603283 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" podUID="2a14e97e-dd33-47d5-8c93-2cd1747a0ba7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.699622 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 
16:44:24.699966 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.199927577 +0000 UTC m=+228.345985147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.700049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.700482 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.200466552 +0000 UTC m=+228.346524132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.786026 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.786108 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.801303 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.801457 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.301427827 +0000 UTC m=+228.447485377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.801561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.801984 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.301973901 +0000 UTC m=+228.448031461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.902465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.902616 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.402587307 +0000 UTC m=+228.548644857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:24 crc kubenswrapper[4792]: I0319 16:44:24.902705 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:24 crc kubenswrapper[4792]: E0319 16:44:24.903161 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.403078371 +0000 UTC m=+228.549135931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.004593 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.004785 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.504758685 +0000 UTC m=+228.650816225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.005131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.005450 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.505439624 +0000 UTC m=+228.651497164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.047856 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8caec22c_6b2b_4b86_904d_a7954633e59d.slice/crio-f0f574613e48628aa7320194dcf1b52978c25d7bcc33217ab91f07a00b223ef3 WatchSource:0}: Error finding container f0f574613e48628aa7320194dcf1b52978c25d7bcc33217ab91f07a00b223ef3: Status 404 returned error can't find the container with id f0f574613e48628aa7320194dcf1b52978c25d7bcc33217ab91f07a00b223ef3 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.048077 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:25 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:25 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:25 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.048305 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.049251 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb749c00a_6a69_4782_8018_7e6f759c9575.slice/crio-f00b5ae90318e7e1708808c6bf0afe8a1e81f06fafb457b47c1544df97b12cab WatchSource:0}: Error finding container f00b5ae90318e7e1708808c6bf0afe8a1e81f06fafb457b47c1544df97b12cab: Status 404 returned error can't find the container with id f00b5ae90318e7e1708808c6bf0afe8a1e81f06fafb457b47c1544df97b12cab Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.062670 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317303db_f645_48f1_80f5_23e798ffd8f0.slice/crio-2f03a53615a08b64d80438bad54ab288d0e5bbefffbe7645b86fa79679e4b407 WatchSource:0}: Error finding container 2f03a53615a08b64d80438bad54ab288d0e5bbefffbe7645b86fa79679e4b407: Status 404 returned error can't find the container with id 2f03a53615a08b64d80438bad54ab288d0e5bbefffbe7645b86fa79679e4b407 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.065813 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bxb9l"] Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.069688 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d09edb3_848f_4a5d_bccf_4122850cb7bb.slice/crio-bc08371139c49f3f21aceb1a86b2d6f49865e804a251fbeb7bad0619dcbf387b WatchSource:0}: Error finding container bc08371139c49f3f21aceb1a86b2d6f49865e804a251fbeb7bad0619dcbf387b: Status 404 returned error can't find the container with id bc08371139c49f3f21aceb1a86b2d6f49865e804a251fbeb7bad0619dcbf387b Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.072647 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.076650 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-dzdmn"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.078745 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.086202 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.088854 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.112425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.112780 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.612760192 +0000 UTC m=+228.758817732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.112996 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.130431 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.134337 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.136063 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.139768 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.144732 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5dbc"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.146849 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bw2ct"] Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.149105 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75bebd8_969f_4e62_81a3_4ff5e456ce28.slice/crio-7351304041ac96ef8a9fa022fee0a58f8fcd8b20ff024159ff7a35fdf2f04141 WatchSource:0}: Error finding container 7351304041ac96ef8a9fa022fee0a58f8fcd8b20ff024159ff7a35fdf2f04141: Status 404 returned error can't find the container with id 7351304041ac96ef8a9fa022fee0a58f8fcd8b20ff024159ff7a35fdf2f04141 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.151576 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754"] Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.152449 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adf0a51_8344_4d3e_906b_423278cf06b7.slice/crio-9334942525c159803c0d5065c134a3a6441e6ec5bd042664cd007b9123fdb2e4 WatchSource:0}: Error finding container 9334942525c159803c0d5065c134a3a6441e6ec5bd042664cd007b9123fdb2e4: Status 404 returned error can't find the container with id 9334942525c159803c0d5065c134a3a6441e6ec5bd042664cd007b9123fdb2e4 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.155307 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.156179 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"] Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.157601 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee3f5314_ad5f_4391_802e_4106ab9a6c4d.slice/crio-dc8e58bcef82976b3f4ef01b4da8154290e90e61a9c88cfd213531f12a2c4bf7 WatchSource:0}: Error finding container dc8e58bcef82976b3f4ef01b4da8154290e90e61a9c88cfd213531f12a2c4bf7: Status 404 returned error can't find the container 
with id dc8e58bcef82976b3f4ef01b4da8154290e90e61a9c88cfd213531f12a2c4bf7 Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.164792 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6470e583_2fed_4638_a5b3_3213db4f4b84.slice/crio-0b966a84456551fbf03e919b42812ad7a6c7e2a9759c965eaa4fe536b0ecada9 WatchSource:0}: Error finding container 0b966a84456551fbf03e919b42812ad7a6c7e2a9759c965eaa4fe536b0ecada9: Status 404 returned error can't find the container with id 0b966a84456551fbf03e919b42812ad7a6c7e2a9759c965eaa4fe536b0ecada9 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.172609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9qk59" event={"ID":"b749c00a-6a69-4782-8018-7e6f759c9575","Type":"ContainerStarted","Data":"f00b5ae90318e7e1708808c6bf0afe8a1e81f06fafb457b47c1544df97b12cab"} Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.176701 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice/crio-854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9 WatchSource:0}: Error finding container 854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9: Status 404 returned error can't find the container with id 854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.178191 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" event={"ID":"0850c733-a734-4c4b-9952-42b30f77822f","Type":"ContainerStarted","Data":"511fb3054c5bee024ac088a7c86214522591e43f085f8133c87965c660ca2bf9"} Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.179283 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd354fc8d_a39d_4d0d_bbb5_f8d72522d42e.slice/crio-9e94615d2b509f98a0cd158e0cf8460d54cc08b0dec532e37ce859075ba4fc13 WatchSource:0}: Error finding container 9e94615d2b509f98a0cd158e0cf8460d54cc08b0dec532e37ce859075ba4fc13: Status 404 returned error can't find the container with id 9e94615d2b509f98a0cd158e0cf8460d54cc08b0dec532e37ce859075ba4fc13 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.179698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" event={"ID":"e5fea090-6dce-44d1-b5bc-9dabbfa00286","Type":"ContainerStarted","Data":"47aff755da37693c25e66d38a0d806c7793ca46407e917557ad1b596fdd36a22"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.180770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzdmn" event={"ID":"6adf0a51-8344-4d3e-906b-423278cf06b7","Type":"ContainerStarted","Data":"9334942525c159803c0d5065c134a3a6441e6ec5bd042664cd007b9123fdb2e4"} Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.181356 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eea1483_0c0b_46af_94a0_856a9a25128c.slice/crio-68b21674b3a08657aff1e8bd490f2798c2c855047312c5655dac826e6b9bfebb WatchSource:0}: Error finding container 68b21674b3a08657aff1e8bd490f2798c2c855047312c5655dac826e6b9bfebb: Status 404 returned error can't find the container with id 68b21674b3a08657aff1e8bd490f2798c2c855047312c5655dac826e6b9bfebb Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.182028 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" event={"ID":"8caec22c-6b2b-4b86-904d-a7954633e59d","Type":"ContainerStarted","Data":"f0f574613e48628aa7320194dcf1b52978c25d7bcc33217ab91f07a00b223ef3"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 
16:44:25.184176 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" event={"ID":"9a45e861-132e-4e80-8bf5-f48c43844b99","Type":"ContainerStarted","Data":"9bef6edb4fe27e52fba0492c389952fdecd40a4df28175b58217a448908562f0"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.185020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" event={"ID":"ee3f5314-ad5f-4391-802e-4106ab9a6c4d","Type":"ContainerStarted","Data":"dc8e58bcef82976b3f4ef01b4da8154290e90e61a9c88cfd213531f12a2c4bf7"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.186773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" event={"ID":"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7","Type":"ContainerStarted","Data":"f520fb9796c73f29250f01da9a2af96443bd7cf63c1d38af67efd2bb75e9d05d"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.187647 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" event={"ID":"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f","Type":"ContainerStarted","Data":"3182fb94d9d732f0658de0ea5919b697c724b527c44800c261bdfb2452fd504a"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.201170 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" event={"ID":"2d09edb3-848f-4a5d-bccf-4122850cb7bb","Type":"ContainerStarted","Data":"bc08371139c49f3f21aceb1a86b2d6f49865e804a251fbeb7bad0619dcbf387b"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.202533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bxb9l" 
event={"ID":"f75bebd8-969f-4e62-81a3-4ff5e456ce28","Type":"ContainerStarted","Data":"7351304041ac96ef8a9fa022fee0a58f8fcd8b20ff024159ff7a35fdf2f04141"} Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.202930 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03c93f52_3a7f_4fbc_921e_79ad74db2d4e.slice/crio-e65818a118aa3dffd96e40a75ed59c6b3851c23a0dfa30f43991b76ec4cc74a3 WatchSource:0}: Error finding container e65818a118aa3dffd96e40a75ed59c6b3851c23a0dfa30f43991b76ec4cc74a3: Status 404 returned error can't find the container with id e65818a118aa3dffd96e40a75ed59c6b3851c23a0dfa30f43991b76ec4cc74a3 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.203721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerStarted","Data":"2f03a53615a08b64d80438bad54ab288d0e5bbefffbe7645b86fa79679e4b407"} Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.207184 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod990ccb69_2ef3_40de_a969_a985d3a60a04.slice/crio-e652dd5331c1c01395209510e7aa5deb0f1ce89f195229473cd8602b0ae3064a WatchSource:0}: Error finding container e652dd5331c1c01395209510e7aa5deb0f1ce89f195229473cd8602b0ae3064a: Status 404 returned error can't find the container with id e652dd5331c1c01395209510e7aa5deb0f1ce89f195229473cd8602b0ae3064a Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.210392 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ckppm" event={"ID":"70747030-a75c-4fef-840e-d79555471540","Type":"ContainerStarted","Data":"c0e78732006b965d9aa2228d28d5bfb96022353cb656ebce3f913ad5c5e2dd98"} Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.213999 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.214678 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.714660033 +0000 UTC m=+228.860717663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: W0319 16:44:25.229178 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9de26e1_dfe8_43dc_bc11_568c2d5dee2d.slice/crio-1f961a0259857bf4848e38d7dae10a403d348cc1a835b16113b2832d72218ca3 WatchSource:0}: Error finding container 1f961a0259857bf4848e38d7dae10a403d348cc1a835b16113b2832d72218ca3: Status 404 returned error can't find the container with id 1f961a0259857bf4848e38d7dae10a403d348cc1a835b16113b2832d72218ca3 Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.325670 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.326431 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.826376863 +0000 UTC m=+228.972434413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.326566 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.327368 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.827345809 +0000 UTC m=+228.973403359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.428151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.428488 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.928454808 +0000 UTC m=+229.074512348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.429168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.429593 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:25.929576929 +0000 UTC m=+229.075634469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.530232 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.530506 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.030478972 +0000 UTC m=+229.176536512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.530644 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.531132 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.031115029 +0000 UTC m=+229.177172569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.632037 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.632205 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.132183217 +0000 UTC m=+229.278240757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.632272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.632576 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.132561978 +0000 UTC m=+229.278619518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.703489 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:25 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:25 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:25 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.703558 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.733425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.733666 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:26.233615465 +0000 UTC m=+229.379673015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.734147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.734500 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.234490198 +0000 UTC m=+229.380547748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.835300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.835744 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.335722621 +0000 UTC m=+229.481780161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.866768 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"]
Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.867066 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" containerName="controller-manager" containerID="cri-o://c4d7d3a8d51507f13eed5d12651b512678f65bb4a0314f0cbee9d2497c3cfe2f" gracePeriod=30
Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.903622 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"]
Mar 19 16:44:25 crc kubenswrapper[4792]: I0319 16:44:25.936665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:25 crc kubenswrapper[4792]: E0319 16:44:25.937067 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.437050966 +0000 UTC m=+229.583108506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.037334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.037545 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.537490407 +0000 UTC m=+229.683547947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.139081 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.139869 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.63985714 +0000 UTC m=+229.785914680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.240811 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.241214 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.741193245 +0000 UTC m=+229.887250785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.251543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" event={"ID":"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308","Type":"ContainerStarted","Data":"466691adef5564b73be6a078fc69cfc5b15df78354b24d0f72f42e941bf90810"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.261622 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" event={"ID":"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d","Type":"ContainerStarted","Data":"1f961a0259857bf4848e38d7dae10a403d348cc1a835b16113b2832d72218ca3"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.263883 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" event={"ID":"990ccb69-2ef3-40de-a969-a985d3a60a04","Type":"ContainerStarted","Data":"e652dd5331c1c01395209510e7aa5deb0f1ce89f195229473cd8602b0ae3064a"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.265530 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" event={"ID":"a446d1fe-6ebb-425a-8b70-b3225da28873","Type":"ContainerStarted","Data":"965b4a41f6de018a6639860d5f1a3c599ab470e20a5c19541c9d1c2a7fa8b66d"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.271066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" event={"ID":"1fdc3fb5-f78e-4a1c-8c25-771bee54fd09","Type":"ContainerStarted","Data":"553d47c7660b50c8ad37895a5e6b1a0155322a164ce788cee0afa02a8517c742"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.274630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" event={"ID":"36f74c49-94ef-404a-aeab-c3ef752df373","Type":"ContainerStarted","Data":"1f181a1520fa3969f93f945c359b31aff9c79bfeb024bc1add2920ff505b440c"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.276862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" event={"ID":"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794","Type":"ContainerStarted","Data":"203253d1144db133ca0303855d79494ba1a15b21a9907c8cdda1a9412cdb7795"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.284294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerStarted","Data":"c94669e020474b6e01c61b45891b4e40ac9cd7b858652bcba1aeed08b4ec5781"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.287200 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jb9zs" podStartSLOduration=173.287180744 podStartE2EDuration="2m53.287180744s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:26.285068387 +0000 UTC m=+229.431125927" watchObservedRunningTime="2026-03-19 16:44:26.287180744 +0000 UTC m=+229.433238284"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.289974 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" event={"ID":"c43d7a6a-8816-4471-92f5-32dc458c677f","Type":"ContainerStarted","Data":"11e485c31354616747eaecd1f143ecee5fb729fd819611c2c682454b9488c12b"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.291444 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xkgg2"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.292962 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.293029 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.324430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" event={"ID":"b8eb2662-5241-48e2-9a13-20e0635514ae","Type":"ContainerStarted","Data":"76c99a16179a32852be3c52c0f1867694d44eb882ce0792c8dd5cd2677bf89d7"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.327552 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" event={"ID":"e7c8fc86-569f-425e-bb93-e75a206f1e68","Type":"ContainerStarted","Data":"19363e6f5be548a549aa63667129019dc2b1b2164cb21d5ac92d68a780bf2215"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.327596 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" event={"ID":"e7c8fc86-569f-425e-bb93-e75a206f1e68","Type":"ContainerStarted","Data":"dc253f7d820c9b2067a7361471e680cea3ec5dbf7e0bad0dba147a2ccf061d81"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.329477 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" event={"ID":"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e","Type":"ContainerStarted","Data":"9e94615d2b509f98a0cd158e0cf8460d54cc08b0dec532e37ce859075ba4fc13"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.332228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" event={"ID":"6470e583-2fed-4638-a5b3-3213db4f4b84","Type":"ContainerStarted","Data":"0b966a84456551fbf03e919b42812ad7a6c7e2a9759c965eaa4fe536b0ecada9"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.340630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" event={"ID":"03c93f52-3a7f-4fbc-921e-79ad74db2d4e","Type":"ContainerStarted","Data":"e65818a118aa3dffd96e40a75ed59c6b3851c23a0dfa30f43991b76ec4cc74a3"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.342438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.343604 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.843586148 +0000 UTC m=+229.989643688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.356279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" event={"ID":"5eea1483-0c0b-46af-94a0-856a9a25128c","Type":"ContainerStarted","Data":"68b21674b3a08657aff1e8bd490f2798c2c855047312c5655dac826e6b9bfebb"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.358737 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" event={"ID":"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4","Type":"ContainerStarted","Data":"42ddb4c03055c27ed6d572924fb639305690a2cad78a583ce733459b626c8fda"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.361119 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.361122 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.361160 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.365453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" event={"ID":"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa","Type":"ContainerStarted","Data":"b10275657ff7587db0a55f62d1bffd10f3cc56cecfeac6203c56c2fd036914f9"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.367575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" event={"ID":"ecfa468d-32df-43ac-8884-40aad47fd099","Type":"ContainerStarted","Data":"854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.371363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" event={"ID":"89bffca4-d37a-4bf9-a958-f1a3c9f413e0","Type":"ContainerStarted","Data":"5d60e4285acbe8840e4e0a5031673ad838c60f2bed5dcb961f79e6f187173879"}
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.380713 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podStartSLOduration=173.380695355 podStartE2EDuration="2m53.380695355s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:26.319813077 +0000 UTC m=+229.465870627" watchObservedRunningTime="2026-03-19 16:44:26.380695355 +0000 UTC m=+229.526752895"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.397933 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ckppm" podStartSLOduration=8.397909156 podStartE2EDuration="8.397909156s" podCreationTimestamp="2026-03-19 16:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:26.397390262 +0000 UTC m=+229.543447802" watchObservedRunningTime="2026-03-19 16:44:26.397909156 +0000 UTC m=+229.543966696"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.398859 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podStartSLOduration=173.398833612 podStartE2EDuration="2m53.398833612s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:26.381080205 +0000 UTC m=+229.527137745" watchObservedRunningTime="2026-03-19 16:44:26.398833612 +0000 UTC m=+229.544891162"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.443451 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.443594 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.943565437 +0000 UTC m=+230.089622977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.443800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.444941 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:26.944921444 +0000 UTC m=+230.090978984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.545099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.545327 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.045287462 +0000 UTC m=+230.191345002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.545397 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.546042 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.046034793 +0000 UTC m=+230.192092333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.646417 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.646696 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.146648978 +0000 UTC m=+230.292706518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.646789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.647130 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.147115731 +0000 UTC m=+230.293173271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.703109 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 16:44:26 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Mar 19 16:44:26 crc kubenswrapper[4792]: [+]process-running ok
Mar 19 16:44:26 crc kubenswrapper[4792]: healthz check failed
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.703207 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.748977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.749240 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.249204477 +0000 UTC m=+230.395262027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.749428 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.750038 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.250027229 +0000 UTC m=+230.396084769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.850363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.850497 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.350471279 +0000 UTC m=+230.496528829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.850694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.851686 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.35157029 +0000 UTC m=+230.497627940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.951261 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.951580 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.451554558 +0000 UTC m=+230.597612098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:26 crc kubenswrapper[4792]: I0319 16:44:26.951931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:26 crc kubenswrapper[4792]: E0319 16:44:26.952353 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.452331849 +0000 UTC m=+230.598389409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.052532 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.052791 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.552745479 +0000 UTC m=+230.698803029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.052926 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.053318 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.553303144 +0000 UTC m=+230.699360684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.153539 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.153703 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.653671013 +0000 UTC m=+230.799728563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.153775 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.154086 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.654073684 +0000 UTC m=+230.800131224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.254867 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.255061 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.755030318 +0000 UTC m=+230.901087858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.255147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.255473 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.755465681 +0000 UTC m=+230.901523221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.356498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.356679 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.856640101 +0000 UTC m=+231.002697641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.357044 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.357170 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.857157415 +0000 UTC m=+231.003214955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.377279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" event={"ID":"a446d1fe-6ebb-425a-8b70-b3225da28873","Type":"ContainerStarted","Data":"52e616e85afe2af243d3d4b365bf6ecc4c7008c0dc43874d766ee3616a2f0251"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.378496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" event={"ID":"2d09edb3-848f-4a5d-bccf-4122850cb7bb","Type":"ContainerStarted","Data":"d18a2c9d1de6dee35b071bab6c01a888ffb725f1358fb4097efbc5fc4ae06690"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.380077 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" event={"ID":"8caec22c-6b2b-4b86-904d-a7954633e59d","Type":"ContainerStarted","Data":"d7f647dded423d50b72ddac8f3e31966dd201910bd9e549f45c543f3d8ae2abf"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.381424 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" event={"ID":"990ccb69-2ef3-40de-a969-a985d3a60a04","Type":"ContainerStarted","Data":"11db4d36c485c56a4f91b74b866e77d85607feac191802b8416421a103a334dd"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.382707 4792 generic.go:334] "Generic (PLEG): container finished" podID="356468d1-7817-4566-bb80-ca21f4b9ff24" 
containerID="c94669e020474b6e01c61b45891b4e40ac9cd7b858652bcba1aeed08b4ec5781" exitCode=0 Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.382742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerDied","Data":"c94669e020474b6e01c61b45891b4e40ac9cd7b858652bcba1aeed08b4ec5781"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.385096 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" event={"ID":"9a45e861-132e-4e80-8bf5-f48c43844b99","Type":"ContainerStarted","Data":"67b7a9f3c2070fb38b61795138f47091a468cbccf43f2d80474793f016775c43"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.386863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" event={"ID":"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f","Type":"ContainerStarted","Data":"f9663a17f1aef44dd69336eb8741f27db14f8cb3a77973fb523ca02d7e4c5017"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.388618 4792 generic.go:334] "Generic (PLEG): container finished" podID="5696f5a2-e040-4aa0-818d-a390c8128171" containerID="c4d7d3a8d51507f13eed5d12651b512678f65bb4a0314f0cbee9d2497c3cfe2f" exitCode=0 Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.388691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" event={"ID":"5696f5a2-e040-4aa0-818d-a390c8128171","Type":"ContainerDied","Data":"c4d7d3a8d51507f13eed5d12651b512678f65bb4a0314f0cbee9d2497c3cfe2f"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.390123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9qk59" 
event={"ID":"b749c00a-6a69-4782-8018-7e6f759c9575","Type":"ContainerStarted","Data":"b71cd4e274004b910f897712d6137d6267ab4293be9d431db23003675d7dae53"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.392953 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" event={"ID":"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d","Type":"ContainerStarted","Data":"2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d"} Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.393383 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.393434 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.394132 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.394173 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection 
refused" Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.416905 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r6754" podStartSLOduration=174.416881671 podStartE2EDuration="2m54.416881671s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:27.411649167 +0000 UTC m=+230.557706717" watchObservedRunningTime="2026-03-19 16:44:27.416881671 +0000 UTC m=+230.562939211" Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.432300 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nwdkb" podStartSLOduration=174.432281193 podStartE2EDuration="2m54.432281193s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:27.431224884 +0000 UTC m=+230.577282434" watchObservedRunningTime="2026-03-19 16:44:27.432281193 +0000 UTC m=+230.578338733" Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.458860 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.464297 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:27.959387565 +0000 UTC m=+231.105445125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.563426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.563790 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.063774384 +0000 UTC m=+231.209831924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.664050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.664286 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.164241345 +0000 UTC m=+231.310298895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.664803 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.665263 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.165251552 +0000 UTC m=+231.311309082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.706986 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:27 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:27 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:27 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.707284 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.765862 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.766154 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:28.266136055 +0000 UTC m=+231.412193595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.866974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.867469 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.36745605 +0000 UTC m=+231.513513590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.968226 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.968363 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.468341683 +0000 UTC m=+231.614399223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:27 crc kubenswrapper[4792]: I0319 16:44:27.968399 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:27 crc kubenswrapper[4792]: E0319 16:44:27.968688 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.468681022 +0000 UTC m=+231.614738562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.068750 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.068926 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.568906007 +0000 UTC m=+231.714963547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.069345 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.069950 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.569924804 +0000 UTC m=+231.715982344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.171273 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.171629 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.671596849 +0000 UTC m=+231.817654389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.172178 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.172721 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.672697739 +0000 UTC m=+231.818755279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.202787 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.229890 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.230376 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" containerName="controller-manager" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.230474 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" containerName="controller-manager" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.230653 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" containerName="controller-manager" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.231201 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.250354 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273122 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles\") pod \"5696f5a2-e040-4aa0-818d-a390c8128171\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273194 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca\") pod \"5696f5a2-e040-4aa0-818d-a390c8128171\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273239 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert\") pod \"5696f5a2-e040-4aa0-818d-a390c8128171\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8gs4\" (UniqueName: \"kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4\") pod \"5696f5a2-e040-4aa0-818d-a390c8128171\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config\") pod \"5696f5a2-e040-4aa0-818d-a390c8128171\" (UID: \"5696f5a2-e040-4aa0-818d-a390c8128171\") " Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwkf\" (UniqueName: \"kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273651 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.273757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.274690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5696f5a2-e040-4aa0-818d-a390c8128171" (UID: "5696f5a2-e040-4aa0-818d-a390c8128171"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.275279 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca" (OuterVolumeSpecName: "client-ca") pod "5696f5a2-e040-4aa0-818d-a390c8128171" (UID: "5696f5a2-e040-4aa0-818d-a390c8128171"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.275388 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config" (OuterVolumeSpecName: "config") pod "5696f5a2-e040-4aa0-818d-a390c8128171" (UID: "5696f5a2-e040-4aa0-818d-a390c8128171"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.275491 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.775471923 +0000 UTC m=+231.921529503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.284472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5696f5a2-e040-4aa0-818d-a390c8128171" (UID: "5696f5a2-e040-4aa0-818d-a390c8128171"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.288816 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4" (OuterVolumeSpecName: "kube-api-access-x8gs4") pod "5696f5a2-e040-4aa0-818d-a390c8128171" (UID: "5696f5a2-e040-4aa0-818d-a390c8128171"). InnerVolumeSpecName "kube-api-access-x8gs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.374726 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.374858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.374932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwkf\" (UniqueName: \"kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.374959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.374993 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375092 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375120 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375134 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5696f5a2-e040-4aa0-818d-a390c8128171-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375146 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5696f5a2-e040-4aa0-818d-a390c8128171-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.375156 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8gs4\" (UniqueName: \"kubernetes.io/projected/5696f5a2-e040-4aa0-818d-a390c8128171-kube-api-access-x8gs4\") on node \"crc\" DevicePath \"\"" Mar 19 
16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.376298 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.876279544 +0000 UTC m=+232.022337084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.399338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.399617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.400000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " 
pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.400757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" event={"ID":"b8eb2662-5241-48e2-9a13-20e0635514ae","Type":"ContainerStarted","Data":"cd0830b1cdb7f60c7fd2947828e8c8429e56b46fb6d825bc7b04b517ff7ba789"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.404379 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.417149 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerStarted","Data":"f20db64ee969bda57ee51ed8b3f4f20a8c6c5fd5908bf8dc058ceff82bb28069"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.425270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" event={"ID":"e5fea090-6dce-44d1-b5bc-9dabbfa00286","Type":"ContainerStarted","Data":"afa67c87e0cd3290f9168fc186cc8a510b86704e7bd9ff636529826c1df073bb"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.426738 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" event={"ID":"6470e583-2fed-4638-a5b3-3213db4f4b84","Type":"ContainerStarted","Data":"d9244f955357ada316f0247c3c4331e2d387b9ee414ce500c85aa0edc9b34762"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.430091 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" event={"ID":"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7","Type":"ContainerStarted","Data":"92e84560dec79b1626ba56020f8300728bdd62b674d120abf5cabce801eafeb2"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.431350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" event={"ID":"0850c733-a734-4c4b-9952-42b30f77822f","Type":"ContainerStarted","Data":"e77a1a53f324e53b08ab4f32efbdc51d0b97d1ffd0f5fc517afffcc96c6b10b0"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.432355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bxb9l" event={"ID":"f75bebd8-969f-4e62-81a3-4ff5e456ce28","Type":"ContainerStarted","Data":"be4e610d834a1c4f0639810eed1cae521248c886dc86d27c2b00e3911abec78c"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.434057 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzdmn" event={"ID":"6adf0a51-8344-4d3e-906b-423278cf06b7","Type":"ContainerStarted","Data":"26b2b2291787620d508535f1b83a1d8d605e18b1cedaa7abe07503991155011c"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.434957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" event={"ID":"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794","Type":"ContainerStarted","Data":"71a19a20836a76c9b37c685b28611110d253bb69c4e3a0abbec524f4187f47ce"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.435855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" event={"ID":"5eea1483-0c0b-46af-94a0-856a9a25128c","Type":"ContainerStarted","Data":"a15f777773898aae3b0f322fc66c667560ee2864c1d3b1a810350b8aea05cd16"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.437235 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" event={"ID":"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308","Type":"ContainerStarted","Data":"a2abf98b4a8d292733a5ff6974573edcccb514db5f3c3e9c5f1c84c98555bea0"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.443824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwkf\" (UniqueName: \"kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf\") pod \"controller-manager-5bb64b865b-wvmq6\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.452490 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" event={"ID":"d354fc8d-a39d-4d0d-bbb5-f8d72522d42e","Type":"ContainerStarted","Data":"13c1d34c942580294c5c1c386e18804ec2666720d010c3f46051d0e227252213"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.457607 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bxb9l" podStartSLOduration=10.45758418 podStartE2EDuration="10.45758418s" podCreationTimestamp="2026-03-19 16:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.456132201 +0000 UTC m=+231.602189741" watchObservedRunningTime="2026-03-19 16:44:28.45758418 +0000 UTC m=+231.603641720" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.467290 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" event={"ID":"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa","Type":"ContainerStarted","Data":"0402a4761ed0dc9ca270c6a7695f9df14976fe492de440bdb8dee3b1f239e1ee"} Mar 19 16:44:28 crc 
kubenswrapper[4792]: I0319 16:44:28.476034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.476380 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:28.976362805 +0000 UTC m=+232.122420345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.478807 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5qx8" podStartSLOduration=175.478784561 podStartE2EDuration="2m55.478784561s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.478157874 +0000 UTC m=+231.624215414" watchObservedRunningTime="2026-03-19 16:44:28.478784561 +0000 UTC m=+231.624842101" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.488184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" event={"ID":"ee3f5314-ad5f-4391-802e-4106ab9a6c4d","Type":"ContainerStarted","Data":"29d55b11c08edaa7abd1e5f79177484404ae8f81f8fb904836c0391cb1b14242"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.489975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" event={"ID":"5696f5a2-e040-4aa0-818d-a390c8128171","Type":"ContainerDied","Data":"40036ed060b061a340a77d3f0b8eedfa4b8d66389b3ed2a28db3c1fa69f08426"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.490032 4792 scope.go:117] "RemoveContainer" containerID="c4d7d3a8d51507f13eed5d12651b512678f65bb4a0314f0cbee9d2497c3cfe2f" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.490165 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zwwzh" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.497249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" event={"ID":"ecfa468d-32df-43ac-8884-40aad47fd099","Type":"ContainerStarted","Data":"9ad04bd5c1c44315e2e6a7226b93f4435d7e189f7f6fb7df9f8221bac04bd8c7"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.505429 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerName="route-controller-manager" containerID="cri-o://2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d" gracePeriod=30 Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.505805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" 
event={"ID":"89bffca4-d37a-4bf9-a958-f1a3c9f413e0","Type":"ContainerStarted","Data":"464715cd1e592244bd02661c519c284b73e978f38cafac5b68e2cb8620f0a9ac"} Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508036 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508115 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508170 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508228 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508305 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508331 4792 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.508348 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.509106 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.509340 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.510109 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.510144 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.510172 4792 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lbql2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.510232 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.527545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" podStartSLOduration=175.527526486 podStartE2EDuration="2m55.527526486s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.525290335 +0000 UTC m=+231.671347875" watchObservedRunningTime="2026-03-19 16:44:28.527526486 +0000 UTC m=+231.673584026" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.556221 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.558493 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9qk59" podStartSLOduration=175.558482804 podStartE2EDuration="2m55.558482804s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.540620374 +0000 UTC m=+231.686677914" watchObservedRunningTime="2026-03-19 16:44:28.558482804 +0000 UTC m=+231.704540344" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.582252 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9vlf7" podStartSLOduration=175.582239164 podStartE2EDuration="2m55.582239164s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.580292411 +0000 UTC m=+231.726349951" watchObservedRunningTime="2026-03-19 16:44:28.582239164 +0000 UTC m=+231.728296704" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.584745 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.586294 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:29.086275345 +0000 UTC m=+232.232332885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.668416 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podStartSLOduration=175.668383704 podStartE2EDuration="2m55.668383704s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.660104797 +0000 UTC m=+231.806162357" watchObservedRunningTime="2026-03-19 16:44:28.668383704 +0000 UTC m=+231.814441244" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.700181 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.703074 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.203045212 +0000 UTC m=+232.349102752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.713583 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"] Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.716067 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zwwzh"] Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.718087 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:28 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:28 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:28 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.718145 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.722848 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9de26e1_dfe8_43dc_bc11_568c2d5dee2d.slice/crio-2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5696f5a2_e040_4aa0_818d_a390c8128171.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5696f5a2_e040_4aa0_818d_a390c8128171.slice/crio-40036ed060b061a340a77d3f0b8eedfa4b8d66389b3ed2a28db3c1fa69f08426\": RecentStats: unable to find data in memory cache]" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.802544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.802930 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.302914988 +0000 UTC m=+232.448972528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.891739 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-v5dbc" podStartSLOduration=175.891723209 podStartE2EDuration="2m55.891723209s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:28.724673095 +0000 UTC m=+231.870730645" watchObservedRunningTime="2026-03-19 16:44:28.891723209 +0000 UTC m=+232.037780749" Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.892769 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:44:28 crc kubenswrapper[4792]: W0319 16:44:28.899285 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2d1a8e0_5e00_4f64_b171_213fa622a25d.slice/crio-6df8ba1fc37fc5afe553871543f162358f589b1a7aadae6f32c8669632408606 WatchSource:0}: Error finding container 6df8ba1fc37fc5afe553871543f162358f589b1a7aadae6f32c8669632408606: Status 404 returned error can't find the container with id 6df8ba1fc37fc5afe553871543f162358f589b1a7aadae6f32c8669632408606 Mar 19 16:44:28 crc kubenswrapper[4792]: I0319 16:44:28.903114 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:28 crc kubenswrapper[4792]: E0319 16:44:28.903673 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.403658557 +0000 UTC m=+232.549716097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.004555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.005174 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.505146985 +0000 UTC m=+232.651204515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.105806 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.106033 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.605991787 +0000 UTC m=+232.752049327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.106285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.106580 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.606564703 +0000 UTC m=+232.752622243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.207000 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.207099 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.707084195 +0000 UTC m=+232.853141735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.207298 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.207963 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.707934249 +0000 UTC m=+232.853991809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.308965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.309403 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.809379847 +0000 UTC m=+232.955437388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.410316 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.411275 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:29.911257817 +0000 UTC m=+233.057315357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.512116 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.512295 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.012267463 +0000 UTC m=+233.158325003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.512428 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.512765 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.012757607 +0000 UTC m=+233.158815147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.514101 4792 generic.go:334] "Generic (PLEG): container finished" podID="3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308" containerID="a2abf98b4a8d292733a5ff6974573edcccb514db5f3c3e9c5f1c84c98555bea0" exitCode=0 Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.514931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" event={"ID":"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308","Type":"ContainerDied","Data":"a2abf98b4a8d292733a5ff6974573edcccb514db5f3c3e9c5f1c84c98555bea0"} Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.519199 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" event={"ID":"8caec22c-6b2b-4b86-904d-a7954633e59d","Type":"ContainerStarted","Data":"01bc58f579d129f044ca63f8186f459df9ef73ff8387486c92e5bf6c73f56413"} Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.522344 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-lbql2_b9de26e1-dfe8-43dc-bc11-568c2d5dee2d/route-controller-manager/0.log" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.522403 4792 generic.go:334] "Generic (PLEG): container finished" podID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerID="2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d" exitCode=2 Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.522481 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" event={"ID":"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d","Type":"ContainerDied","Data":"2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d"} Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.524180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerStarted","Data":"bbf99bcf3f1a102ffda62028210cde474da248eaba75dd048f3b8d64a3411cd2"} Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.526657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" event={"ID":"a2d1a8e0-5e00-4f64-b171-213fa622a25d","Type":"ContainerStarted","Data":"6df8ba1fc37fc5afe553871543f162358f589b1a7aadae6f32c8669632408606"} Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.527638 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.527697 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.527738 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= 
Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.527785 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.528313 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.528524 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.534464 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.534538 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.534471 4792 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2t8f8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.534660 4792 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.552741 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podStartSLOduration=176.55271589 podStartE2EDuration="2m56.55271589s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.550777018 +0000 UTC m=+232.696834558" watchObservedRunningTime="2026-03-19 16:44:29.55271589 +0000 UTC m=+232.698773430" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.562111 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.562179 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.566010 4792 patch_prober.go:28] interesting pod/console-f9d7485db-q29n4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.566068 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q29n4" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 19 16:44:29 crc 
kubenswrapper[4792]: I0319 16:44:29.570974 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podStartSLOduration=176.57095809 podStartE2EDuration="2m56.57095809s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.569089469 +0000 UTC m=+232.715147029" watchObservedRunningTime="2026-03-19 16:44:29.57095809 +0000 UTC m=+232.717015630" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.583177 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l4f68" podStartSLOduration=176.583158354 podStartE2EDuration="2m56.583158354s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.582122236 +0000 UTC m=+232.728179786" watchObservedRunningTime="2026-03-19 16:44:29.583158354 +0000 UTC m=+232.729215894" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.601425 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9v4gc" podStartSLOduration=176.601408114 podStartE2EDuration="2m56.601408114s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.599267206 +0000 UTC m=+232.745324746" watchObservedRunningTime="2026-03-19 16:44:29.601408114 +0000 UTC m=+232.747465654" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.614189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.614244 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.114219315 +0000 UTC m=+233.260276855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.617484 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.625092 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ms27t" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.633337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.633913 4792 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.133887273 +0000 UTC m=+233.279945003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.653779 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t6b87" podStartSLOduration=176.653724617 podStartE2EDuration="2m56.653724617s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.646648443 +0000 UTC m=+232.792705983" watchObservedRunningTime="2026-03-19 16:44:29.653724617 +0000 UTC m=+232.799782157" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.654144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" podStartSLOduration=176.654136448 podStartE2EDuration="2m56.654136448s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.623803607 +0000 UTC m=+232.769861147" watchObservedRunningTime="2026-03-19 16:44:29.654136448 +0000 UTC m=+232.800193988" Mar 19 16:44:29 crc 
kubenswrapper[4792]: I0319 16:44:29.672409 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h9f57" podStartSLOduration=176.672391838 podStartE2EDuration="2m56.672391838s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.670680652 +0000 UTC m=+232.816738192" watchObservedRunningTime="2026-03-19 16:44:29.672391838 +0000 UTC m=+232.818449378" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.693204 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sktld" podStartSLOduration=176.693186148 podStartE2EDuration="2m56.693186148s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.692631372 +0000 UTC m=+232.838688922" watchObservedRunningTime="2026-03-19 16:44:29.693186148 +0000 UTC m=+232.839243688" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.707469 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:29 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:29 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:29 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.708187 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.723550 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" podStartSLOduration=176.723531908 podStartE2EDuration="2m56.723531908s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.719378194 +0000 UTC m=+232.865435734" watchObservedRunningTime="2026-03-19 16:44:29.723531908 +0000 UTC m=+232.869589448" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.734502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.736536 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t84hr" podStartSLOduration=176.736509504 podStartE2EDuration="2m56.736509504s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:29.733870662 +0000 UTC m=+232.879928202" watchObservedRunningTime="2026-03-19 16:44:29.736509504 +0000 UTC m=+232.882567044" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.738461 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:30.238416986 +0000 UTC m=+233.384474526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.760032 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5696f5a2-e040-4aa0-818d-a390c8128171" path="/var/lib/kubelet/pods/5696f5a2-e040-4aa0-818d-a390c8128171/volumes" Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.839957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.840678 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.340631625 +0000 UTC m=+233.486689355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.941035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.941344 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.441294872 +0000 UTC m=+233.587352412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:29 crc kubenswrapper[4792]: I0319 16:44:29.941498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:29 crc kubenswrapper[4792]: E0319 16:44:29.942004 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.441993581 +0000 UTC m=+233.588051331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.042250 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.042423 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.542398511 +0000 UTC m=+233.688456051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.042925 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.043278 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.543263025 +0000 UTC m=+233.689320565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.119294 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-lbql2_b9de26e1-dfe8-43dc-bc11-568c2d5dee2d/route-controller-manager/0.log" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.119372 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.144110 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.144256 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.64423452 +0000 UTC m=+233.790292060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.144310 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.144632 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.64462059 +0000 UTC m=+233.790678130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245441 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245540 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shk9j\" (UniqueName: \"kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j\") pod \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245573 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config\") pod \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.245590 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.745569024 +0000 UTC m=+233.891626564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca\") pod \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245746 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert\") pod \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\" (UID: \"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d\") " Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.245922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.246351 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.746333336 +0000 UTC m=+233.892390886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.247321 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config" (OuterVolumeSpecName: "config") pod "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" (UID: "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.247349 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" (UID: "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.252631 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" (UID: "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.252832 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j" (OuterVolumeSpecName: "kube-api-access-shk9j") pod "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" (UID: "b9de26e1-dfe8-43dc-bc11-568c2d5dee2d"). InnerVolumeSpecName "kube-api-access-shk9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.335712 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50180: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347096 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.347259 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.847230649 +0000 UTC m=+233.993288189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347493 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347512 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347522 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shk9j\" (UniqueName: \"kubernetes.io/projected/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-kube-api-access-shk9j\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.347531 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:30 crc kubenswrapper[4792]: 
E0319 16:44:30.347757 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.847748303 +0000 UTC m=+233.993805843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.422943 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50182: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.439175 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50192: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.449115 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.449298 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.949213922 +0000 UTC m=+234.095271472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.449653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.450059 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:30.950039574 +0000 UTC m=+234.096097114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.456313 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50196: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.478987 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50212: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.533005 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" event={"ID":"05d420b5-e7fb-4a41-b088-c7a8cbf91b5f","Type":"ContainerStarted","Data":"23292632d30663d7e51de0d9800c92610f1550c8f36671acfcaad8411c2a7499"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.535512 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" event={"ID":"e5fea090-6dce-44d1-b5bc-9dabbfa00286","Type":"ContainerStarted","Data":"c24f9afefdb7a65368dbee7fd554440baa75a9f770f26a5288d8cce72f2c8478"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.538900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzdmn" event={"ID":"6adf0a51-8344-4d3e-906b-423278cf06b7","Type":"ContainerStarted","Data":"0e7174a8fd791340214ffbae127f12ac286103e426b5d2256f3cc941ea818770"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.539006 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dzdmn" Mar 
19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.541130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" event={"ID":"79e6ba0f-7a19-4676-af04-8cbcc56ab4fa","Type":"ContainerStarted","Data":"94b77679830707eeec7647750427ec1164c035dff621b59cc4e2bcf0e953e674"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.548485 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrb5n" podStartSLOduration=177.54847037 podStartE2EDuration="2m57.54847037s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.547529794 +0000 UTC m=+233.693587324" watchObservedRunningTime="2026-03-19 16:44:30.54847037 +0000 UTC m=+233.694527910" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.550444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.550558 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.050544326 +0000 UTC m=+234.196601866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.550625 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.551134 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.051116142 +0000 UTC m=+234.197173682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.551484 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-lbql2_b9de26e1-dfe8-43dc-bc11-568c2d5dee2d/route-controller-manager/0.log" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.551628 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.553551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2" event={"ID":"b9de26e1-dfe8-43dc-bc11-568c2d5dee2d","Type":"ContainerDied","Data":"1f961a0259857bf4848e38d7dae10a403d348cc1a835b16113b2832d72218ca3"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.553588 4792 scope.go:117] "RemoveContainer" containerID="2b056efa916f5f8da6f4a53b66925e4945d5643bfcbea32b47cb9f0aea8ea41d" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.561178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" event={"ID":"a2d1a8e0-5e00-4f64-b171-213fa622a25d","Type":"ContainerStarted","Data":"e7b6555919034a57ec04c910b109f29abd54d494255aebd52a7550c7607655e8"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.561679 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.566459 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" event={"ID":"a446d1fe-6ebb-425a-8b70-b3225da28873","Type":"ContainerStarted","Data":"8f0207018e0ce6c6ccafd1f48925fd1698c0a4f44e9773c61787a0a835e0f291"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.567206 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.573731 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2ccxc" podStartSLOduration=177.573713591 podStartE2EDuration="2m57.573713591s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.568706414 +0000 UTC m=+233.714763954" watchObservedRunningTime="2026-03-19 16:44:30.573713591 +0000 UTC m=+233.719771131" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.574459 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" event={"ID":"3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308","Type":"ContainerStarted","Data":"2643d9df39b8e4ad0f87853bf56feec7e43e48b0c39f5e484f18890ad51f793e"} Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.575064 4792 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2t8f8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.575105 4792 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.575194 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.575233 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.575288 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.587439 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dzdmn" podStartSLOduration=12.587420966 podStartE2EDuration="12.587420966s" podCreationTimestamp="2026-03-19 16:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.58643345 +0000 UTC m=+233.732490990" watchObservedRunningTime="2026-03-19 16:44:30.587420966 +0000 UTC m=+233.733478496" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.593533 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50224: no serving 
certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.619541 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dk4pz" podStartSLOduration=177.619525525 podStartE2EDuration="2m57.619525525s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.603513817 +0000 UTC m=+233.749571357" watchObservedRunningTime="2026-03-19 16:44:30.619525525 +0000 UTC m=+233.765583055" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.638950 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podStartSLOduration=177.638931367 podStartE2EDuration="2m57.638931367s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.621746896 +0000 UTC m=+233.767804436" watchObservedRunningTime="2026-03-19 16:44:30.638931367 +0000 UTC m=+233.784988907" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.640816 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pz5zs" podStartSLOduration=177.640796717 podStartE2EDuration="2m57.640796717s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.640041477 +0000 UTC m=+233.786099017" watchObservedRunningTime="2026-03-19 16:44:30.640796717 +0000 UTC m=+233.786854257" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.652300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.653425 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.153406953 +0000 UTC m=+234.299464493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.674600 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podStartSLOduration=177.674584594 podStartE2EDuration="2m57.674584594s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.662073661 +0000 UTC m=+233.808131201" watchObservedRunningTime="2026-03-19 16:44:30.674584594 +0000 UTC m=+233.820642134" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.675975 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"] Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.679921 4792 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbql2"] Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.699012 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" podStartSLOduration=4.698990122 podStartE2EDuration="4.698990122s" podCreationTimestamp="2026-03-19 16:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:30.692223826 +0000 UTC m=+233.838281366" watchObservedRunningTime="2026-03-19 16:44:30.698990122 +0000 UTC m=+233.845047662" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.700103 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.704704 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:30 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:30 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:30 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.704774 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.755338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.756487 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.256472956 +0000 UTC m=+234.402530496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.779142 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50238: no serving certificate available for the kubelet" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.856870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.857069 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:31.35704 +0000 UTC m=+234.503097540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.857140 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.857463 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.357455321 +0000 UTC m=+234.503512931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.861068 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.861298 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerName="route-controller-manager" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.861316 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerName="route-controller-manager" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.861409 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" containerName="route-controller-manager" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.861803 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.866517 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.866902 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.866929 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.867098 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.867103 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.870033 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.877530 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.958094 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.958367 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.958446 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.958477 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpj2\" (UniqueName: \"kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:30 crc kubenswrapper[4792]: I0319 16:44:30.958502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:30 crc kubenswrapper[4792]: E0319 16:44:30.958736 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:31.458716515 +0000 UTC m=+234.604774045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.060169 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpj2\" (UniqueName: \"kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.060224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.060271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.060324 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.060375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.061372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.061689 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.561673734 +0000 UTC m=+234.707731274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.062789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.082870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.102876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpj2\" (UniqueName: \"kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2\") pod \"route-controller-manager-8bd46f65c-pgmlx\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.162059 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50250: no serving certificate available for the kubelet" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.162213 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.162493 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.662462684 +0000 UTC m=+234.808520224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.162851 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.163132 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.663124372 +0000 UTC m=+234.809181912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.191572 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.193400 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.263783 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.265417 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.765395303 +0000 UTC m=+234.911452853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.365960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.366261 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.866247535 +0000 UTC m=+235.012305075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.379415 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.379458 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.379512 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.379557 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.389408 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418062 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwjp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418122 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418513 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwjp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418534 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418602 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwjp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.418618 4792 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.468450 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.468718 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:31.96868414 +0000 UTC m=+235.114741700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.491165 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.571583 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.571920 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.071907556 +0000 UTC m=+235.217965096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.599476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" event={"ID":"03c93f52-3a7f-4fbc-921e-79ad74db2d4e","Type":"ContainerStarted","Data":"e740591659f957577660dfd1408112a477322aab82f7c21b292663c1ac5310cd"} Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.621351 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.621629 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.634857 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" podStartSLOduration=178.634824229 podStartE2EDuration="2m58.634824229s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:31.63300552 +0000 UTC m=+234.779063080" watchObservedRunningTime="2026-03-19 16:44:31.634824229 +0000 UTC m=+234.780881769" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.639976 4792 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-9clzb container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get 
\"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.640048 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb" podUID="3ce10dd0-a4a6-4d27-a9d9-2ba8bbdf8308" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.672614 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.673591 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.173577361 +0000 UTC m=+235.319634901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.703533 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.705510 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:31 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:31 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:31 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.705716 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.749949 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9de26e1-dfe8-43dc-bc11-568c2d5dee2d" path="/var/lib/kubelet/pods/b9de26e1-dfe8-43dc-bc11-568c2d5dee2d/volumes" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.774385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.774894 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.274877004 +0000 UTC m=+235.420934544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.840660 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50252: no serving certificate available for the kubelet" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.875632 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.875993 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:32.375976304 +0000 UTC m=+235.522033844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.955987 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 16:44:31 crc kubenswrapper[4792]: I0319 16:44:31.977865 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:31 crc kubenswrapper[4792]: E0319 16:44:31.978206 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.478195113 +0000 UTC m=+235.624252653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.078945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.079416 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.579402614 +0000 UTC m=+235.725460154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.143976 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.181682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.182371 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.682353973 +0000 UTC m=+235.828411513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.282827 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.282976 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.782951068 +0000 UTC m=+235.929008598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.283100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.283438 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.783426692 +0000 UTC m=+235.929484232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.322298 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.384919 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.88490125 +0000 UTC m=+236.030958790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.384963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.386267 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.387736 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.887727717 +0000 UTC m=+236.033785247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.487719 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.487918 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.987887021 +0000 UTC m=+236.133944561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.487981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.488239 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:32.98823114 +0000 UTC m=+236.134288670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.589065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.589420 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.08940594 +0000 UTC m=+236.235463480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.606896 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" event={"ID":"5da2b6a6-e385-40b1-9a80-4ec5c268d043","Type":"ContainerStarted","Data":"c4d6ddd6e5d7246f982a53c4a512769531e399e46a8a903f82e89d7a4ee0e3a7"} Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.606955 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" event={"ID":"5da2b6a6-e385-40b1-9a80-4ec5c268d043","Type":"ContainerStarted","Data":"d2135111696e22cff0ef4f490cf6033dd3667a394dce26977d2399620ae9271c"} Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.690134 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.691858 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.191824645 +0000 UTC m=+236.337882185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.702235 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:32 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:32 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:32 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.702284 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.791424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.791720 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:33.291705571 +0000 UTC m=+236.437763111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.892580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.892936 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.392924033 +0000 UTC m=+236.538981573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.988411 4792 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 19 16:44:32 crc kubenswrapper[4792]: I0319 16:44:32.993484 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:32 crc kubenswrapper[4792]: E0319 16:44:32.993823 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.493807995 +0000 UTC m=+236.639865535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.061334 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" podStartSLOduration=7.061313874 podStartE2EDuration="7.061313874s" podCreationTimestamp="2026-03-19 16:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:32.630929458 +0000 UTC m=+235.776986998" watchObservedRunningTime="2026-03-19 16:44:33.061313874 +0000 UTC m=+236.207371414" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.063424 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4724c"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.064641 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.087195 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.095206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtvj\" (UniqueName: \"kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.095364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.095420 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.095452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:33 crc 
kubenswrapper[4792]: E0319 16:44:33.096006 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.595987583 +0000 UTC m=+236.742045123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.102622 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4724c"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.162639 4792 ???:1] "http: TLS handshake error from 192.168.126.11:50262: no serving certificate available for the kubelet" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.196705 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.196927 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.696900077 +0000 UTC m=+236.842957617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.197075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtvj\" (UniqueName: \"kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.197121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.197201 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.197236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: 
\"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.197558 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.697552145 +0000 UTC m=+236.843609685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.197802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.198554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content\") pod \"certified-operators-4724c\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.216405 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtvj\" (UniqueName: \"kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj\") pod \"certified-operators-4724c\" (UID: 
\"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.268110 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qwbvn"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.269415 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.272640 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.290187 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwbvn"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.292142 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.298755 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.299120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjzc\" (UniqueName: \"kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.299201 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.299283 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.299413 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.799376263 +0000 UTC m=+236.945433803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.303252 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.317345 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.318311 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.322511 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.322619 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.328040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.400727 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401091 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjzc\" (UniqueName: \"kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401192 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.401441 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:33.901425848 +0000 UTC m=+237.047483388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.401543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.403474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.418011 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.419647 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjzc\" (UniqueName: \"kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc\") pod \"community-operators-qwbvn\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.459979 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.461204 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.469509 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.502519 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.502738 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:34.002708031 +0000 UTC m=+237.148765571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.502811 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.502946 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.502986 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.503006 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities\") pod \"certified-operators-mzjbz\" (UID: 
\"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.503026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-778j2\" (UniqueName: \"kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.503060 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.503374 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:34.0033676 +0000 UTC m=+237.149425140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.503649 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.521776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.582944 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.604667 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.604934 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.604956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.604972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-778j2\" (UniqueName: \"kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.605318 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-19 16:44:34.105303381 +0000 UTC m=+237.251360921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.605659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.605926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.626897 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-778j2\" (UniqueName: \"kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2\") pod \"certified-operators-mzjbz\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.632711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" 
event={"ID":"03c93f52-3a7f-4fbc-921e-79ad74db2d4e","Type":"ContainerStarted","Data":"4663e689946e0a410fc02a19d684583f221e95ef79200c6d6cd65a99d60bbc60"} Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.632743 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" event={"ID":"03c93f52-3a7f-4fbc-921e-79ad74db2d4e","Type":"ContainerStarted","Data":"22169b77036a4e27bfcc4de92e88aff1a0f62d8eb9c3fddbfcd47583ad9acb6b"} Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.632757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" event={"ID":"03c93f52-3a7f-4fbc-921e-79ad74db2d4e","Type":"ContainerStarted","Data":"cebc9068ded9edfe56bd4b910a5b13a85212edfbd72c21e32999ceffac790d3d"} Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.638398 4792 generic.go:334] "Generic (PLEG): container finished" podID="ecfa468d-32df-43ac-8884-40aad47fd099" containerID="9ad04bd5c1c44315e2e6a7226b93f4435d7e189f7f6fb7df9f8221bac04bd8c7" exitCode=0 Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.638444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" event={"ID":"ecfa468d-32df-43ac-8884-40aad47fd099","Type":"ContainerDied","Data":"9ad04bd5c1c44315e2e6a7226b93f4435d7e189f7f6fb7df9f8221bac04bd8c7"} Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.639436 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.640488 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.654375 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.657353 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fd9rl"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.658443 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.658828 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" podStartSLOduration=15.658809366 podStartE2EDuration="15.658809366s" podCreationTimestamp="2026-03-19 16:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:33.658461857 +0000 UTC m=+236.804519407" watchObservedRunningTime="2026-03-19 16:44:33.658809366 +0000 UTC m=+236.804866906" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.672755 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd9rl"] Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.704516 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:33 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:33 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:33 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.704606 
4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.709916 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.710024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj452\" (UniqueName: \"kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.710061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.710168 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 
16:44:33.711068 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:34.211055857 +0000 UTC m=+237.357113397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.727105 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4724c"]
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.808253 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzjbz"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.811413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.811645 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj452\" (UniqueName: \"kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.811676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.811762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.812336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.812426 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-19 16:44:34.312406513 +0000 UTC m=+237.458464053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.814018 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.835518 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj452\" (UniqueName: \"kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452\") pod \"community-operators-fd9rl\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.856964 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.857758 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.867895 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.868132 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.877585 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.912749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.912827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.912893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:33 crc kubenswrapper[4792]: E0319 16:44:33.913262 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-19 16:44:34.413247354 +0000 UTC m=+237.559304894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fthfn" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.978638 4792 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-19T16:44:32.988450748Z","Handler":null,"Name":""}
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.982327 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd9rl"
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.983524 4792 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.983550 4792 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Mar 19 16:44:33 crc kubenswrapper[4792]: I0319 16:44:33.993948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 19 16:44:34 crc kubenswrapper[4792]: W0319 16:44:34.007074 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode8c02b85_1a6e_4c84_9ea4_48f063bd9ed5.slice/crio-7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8 WatchSource:0}: Error finding container 7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8: Status 404 returned error can't find the container with id 7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.013996 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.014408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.014505 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.014613 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.019829 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.033584 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.116574 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.122608 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.122644 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.148271 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwbvn"]
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.152393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fthfn\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.163500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.195163 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd9rl"]
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.220631 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.285381 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"]
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.651058 4792 generic.go:334] "Generic (PLEG): container finished" podID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerID="6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676" exitCode=0
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.651245 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerDied","Data":"6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676"}
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.651386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerStarted","Data":"62ff0e797d1d9f6c96535d715b5cd94194c02f15a2e3845a59523179fdc83c45"}
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.655273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5","Type":"ContainerStarted","Data":"662f92872a842bf03006d5ca8dff8cf42da83528a3a4ba6299b541d3380e6e96"}
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.655325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5","Type":"ContainerStarted","Data":"7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8"}
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.691898 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.691881737 podStartE2EDuration="1.691881737s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:34.691103315 +0000 UTC m=+237.837160845" watchObservedRunningTime="2026-03-19 16:44:34.691881737 +0000 UTC m=+237.837939287"
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.713904 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 16:44:34 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Mar 19 16:44:34 crc kubenswrapper[4792]: [+]process-running ok
Mar 19 16:44:34 crc kubenswrapper[4792]: healthz check failed
Mar 19 16:44:34 crc kubenswrapper[4792]: I0319 16:44:34.713972 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.260363 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"]
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.262461 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.265979 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.267274 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"]
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.340479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.340525 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.340704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzrl\" (UniqueName: \"kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.448058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzrl\" (UniqueName: \"kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.448448 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.448469 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.449029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.449423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.468788 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzrl\" (UniqueName: \"kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl\") pod \"redhat-marketplace-n5pth\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.585408 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pth"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.658327 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"]
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.659286 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.676333 4792 generic.go:334] "Generic (PLEG): container finished" podID="e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" containerID="662f92872a842bf03006d5ca8dff8cf42da83528a3a4ba6299b541d3380e6e96" exitCode=0
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.676465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5","Type":"ContainerDied","Data":"662f92872a842bf03006d5ca8dff8cf42da83528a3a4ba6299b541d3380e6e96"}
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.676502 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"]
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.703328 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 16:44:35 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Mar 19 16:44:35 crc kubenswrapper[4792]: [+]process-running ok
Mar 19 16:44:35 crc kubenswrapper[4792]: healthz check failed
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.703691 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.752589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.752964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7j6\" (UniqueName: \"kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.753050 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.754956 4792 ???:1] "http: TLS handshake error from 192.168.126.11:56424: no serving certificate available for the kubelet"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.759416 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.854205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7j6\" (UniqueName: \"kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.854259 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.854325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.854773 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.855061 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.871324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7j6\" (UniqueName: \"kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6\") pod \"redhat-marketplace-r4272\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:35 crc kubenswrapper[4792]: I0319 16:44:35.975616 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4272"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.257665 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"]
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.260777 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.264107 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.271441 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"]
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.360909 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.360957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.361098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwssq\" (UniqueName: \"kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.462501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwssq\" (UniqueName: \"kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.462597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.462615 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.463057 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.463548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.481676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwssq\" (UniqueName: \"kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq\") pod \"redhat-operators-25ctb\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.580605 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ctb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.627920 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.633082 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9clzb"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.661771 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"]
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.667182 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.669689 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"]
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.701726 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 16:44:36 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Mar 19 16:44:36 crc kubenswrapper[4792]: [+]process-running ok
Mar 19 16:44:36 crc kubenswrapper[4792]: healthz check failed
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.702010 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.767496 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.767885 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.768077 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvhq\" (UniqueName: \"kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.869586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvhq\" (UniqueName: \"kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.869670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.869721 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.870269 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns"
Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.870891 4792 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.892273 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvhq\" (UniqueName: \"kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq\") pod \"redhat-operators-dhxns\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:44:36 crc kubenswrapper[4792]: I0319 16:44:36.990813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:44:37 crc kubenswrapper[4792]: W0319 16:44:37.533884 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca871067_aaf0_4f1a_bc9e_29dabe8f1bb2.slice/crio-409a2c3aa92cd2df833a54864981bd7ffa6cb3bfd80c2f9b8aa688c1c3611844 WatchSource:0}: Error finding container 409a2c3aa92cd2df833a54864981bd7ffa6cb3bfd80c2f9b8aa688c1c3611844: Status 404 returned error can't find the container with id 409a2c3aa92cd2df833a54864981bd7ffa6cb3bfd80c2f9b8aa688c1c3611844 Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.576324 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.584491 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.684274 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68729\" (UniqueName: \"kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729\") pod \"ecfa468d-32df-43ac-8884-40aad47fd099\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.684735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume\") pod \"ecfa468d-32df-43ac-8884-40aad47fd099\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.684863 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume\") pod \"ecfa468d-32df-43ac-8884-40aad47fd099\" (UID: \"ecfa468d-32df-43ac-8884-40aad47fd099\") " Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.684918 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir\") pod \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.685135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access\") pod \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\" (UID: \"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5\") " Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.686406 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" (UID: "e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.687019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecfa468d-32df-43ac-8884-40aad47fd099" (UID: "ecfa468d-32df-43ac-8884-40aad47fd099"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.691771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecfa468d-32df-43ac-8884-40aad47fd099" (UID: "ecfa468d-32df-43ac-8884-40aad47fd099"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.695198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" (UID: "e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.695882 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729" (OuterVolumeSpecName: "kube-api-access-68729") pod "ecfa468d-32df-43ac-8884-40aad47fd099" (UID: "ecfa468d-32df-43ac-8884-40aad47fd099"). 
InnerVolumeSpecName "kube-api-access-68729". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.696020 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecfa468d-32df-43ac-8884-40aad47fd099-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.696162 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecfa468d-32df-43ac-8884-40aad47fd099-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.696275 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.696450 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.700717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerStarted","Data":"04285f2e5a1d73a0d0ed0606044d2a1bf02b469bbc747a90543ba80ccf758a3c"} Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.701723 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:37 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:37 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:37 crc kubenswrapper[4792]: healthz check 
failed Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.701997 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.703764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5","Type":"ContainerDied","Data":"7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8"} Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.703793 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe1d17d778adcd4c2cdd98236843f3795440ac65462cd4b6e588be02220c5b8" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.703922 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.706997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" event={"ID":"ecfa468d-32df-43ac-8884-40aad47fd099","Type":"ContainerDied","Data":"854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9"} Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.707057 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854a7152bd80af7f301447247dbf0abb79bce0a0a04fe40ac7fbe8c1632490d9" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.707228 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl" Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.707808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerStarted","Data":"409a2c3aa92cd2df833a54864981bd7ffa6cb3bfd80c2f9b8aa688c1c3611844"} Mar 19 16:44:37 crc kubenswrapper[4792]: I0319 16:44:37.797661 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68729\" (UniqueName: \"kubernetes.io/projected/ecfa468d-32df-43ac-8884-40aad47fd099-kube-api-access-68729\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:38 crc kubenswrapper[4792]: I0319 16:44:38.702563 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:38 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:38 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:38 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:38 crc kubenswrapper[4792]: I0319 16:44:38.703049 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:38 crc kubenswrapper[4792]: I0319 16:44:38.714398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerStarted","Data":"fbbeb8b2949a9316867c2869e8085395d0a7f0557e42c0bbcdecc6bab0f670da"} Mar 19 16:44:38 crc kubenswrapper[4792]: E0319 16:44:38.838908 4792 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data in memory cache]" Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.311511 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"] Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.533473 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dzdmn" Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.566477 4792 patch_prober.go:28] interesting pod/console-f9d7485db-q29n4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.566532 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q29n4" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.704248 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:39 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:39 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:39 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.704672 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.725947 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"] Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.734286 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"] Mar 19 16:44:39 crc kubenswrapper[4792]: W0319 16:44:39.743116 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e59df38_8404_4664_96d7_481e34988bee.slice/crio-0d1944fcb5fdfcc947a04eae473eea98adc1708cbb74da6773244bfdb9f25c0b WatchSource:0}: Error finding container 0d1944fcb5fdfcc947a04eae473eea98adc1708cbb74da6773244bfdb9f25c0b: Status 404 returned error can't find the container with id 0d1944fcb5fdfcc947a04eae473eea98adc1708cbb74da6773244bfdb9f25c0b Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.745801 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerID="64f45b2d8c3b6184e917689f85df32d0ef01e41aa76e8474cfe55281eb50ea6c" exitCode=0 Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.748123 4792 generic.go:334] "Generic (PLEG): container finished" podID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerID="74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c" exitCode=0 Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.756255 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerDied","Data":"64f45b2d8c3b6184e917689f85df32d0ef01e41aa76e8474cfe55281eb50ea6c"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.756287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" 
event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerDied","Data":"74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.756369 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerID="88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d" exitCode=0 Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.756402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerDied","Data":"88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.756415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerStarted","Data":"6d092c6c7d989b6b100f88c9bb43f51865b5189fa21ce5f81aceec6760b3f56d"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.762997 4792 generic.go:334] "Generic (PLEG): container finished" podID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerID="22fff4b51c89d56ee537adfe1c80d0b26f6656e28e241a522e908e0eeb940037" exitCode=0 Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.763190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerDied","Data":"22fff4b51c89d56ee537adfe1c80d0b26f6656e28e241a522e908e0eeb940037"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.775531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" event={"ID":"422112a2-a6c2-4d09-aaeb-e4f9924ed96e","Type":"ContainerStarted","Data":"adad26fad0c9ffd603d8a730f225b94a613021e68583df3cd447d3f1170c9afe"} Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 
16:44:39.851414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"] Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.860943 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.861589 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:44:39 crc kubenswrapper[4792]: W0319 16:44:39.875635 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a18336_1f12_45bf_a9e0_0c3106a4abe1.slice/crio-c3315b7cadc34c174bc5f3b94ecd97439de91f694bbeb4462fd4d639dba172f2 WatchSource:0}: Error finding container c3315b7cadc34c174bc5f3b94ecd97439de91f694bbeb4462fd4d639dba172f2: Status 404 returned error can't find the container with id c3315b7cadc34c174bc5f3b94ecd97439de91f694bbeb4462fd4d639dba172f2 Mar 19 16:44:39 crc kubenswrapper[4792]: I0319 16:44:39.887697 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" podStartSLOduration=22.781894762 podStartE2EDuration="39.887680204s" podCreationTimestamp="2026-03-19 16:44:00 +0000 UTC" firstStartedPulling="2026-03-19 16:44:22.21655251 +0000 UTC m=+225.362610050" lastFinishedPulling="2026-03-19 16:44:39.322337952 +0000 UTC m=+242.468395492" observedRunningTime="2026-03-19 16:44:39.884374803 +0000 UTC m=+243.030432343" watchObservedRunningTime="2026-03-19 16:44:39.887680204 +0000 UTC m=+243.033737734" Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.196647 4792 csr.go:261] certificate signing request csr-rzg77 is approved, waiting to be issued Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.205237 4792 csr.go:257] certificate signing request csr-rzg77 is issued Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.726666 4792 
patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:40 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:40 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:40 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.726994 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.807976 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e59df38-8404-4664-96d7-481e34988bee" containerID="42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b" exitCode=0 Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.808059 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerDied","Data":"42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.808090 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerStarted","Data":"0d1944fcb5fdfcc947a04eae473eea98adc1708cbb74da6773244bfdb9f25c0b"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.811116 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e3a0113e-520f-4ba0-96bd-ac0a33b087ad","Type":"ContainerStarted","Data":"285f5a15135f12df6033d4e249e9c7f40ff290efd2433fbb08b246e7cb5ca06f"} Mar 19 16:44:40 
crc kubenswrapper[4792]: I0319 16:44:40.811157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e3a0113e-520f-4ba0-96bd-ac0a33b087ad","Type":"ContainerStarted","Data":"9beb835253ab20d9fbe33dcf9c022545846e3f694fa9986695f44f6a1a86cb8a"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.814291 4792 generic.go:334] "Generic (PLEG): container finished" podID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerID="e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc" exitCode=0 Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.814355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerDied","Data":"e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.814386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerStarted","Data":"bb99d12c3721d9d4a893146fbcde3f2f55767fc6f1fba97dd8eb4d25ef9fe898"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.817156 4792 generic.go:334] "Generic (PLEG): container finished" podID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerID="42e34f7d42bedf62dc4e009080f8131952f4defcdd16073100bcf082dee371dd" exitCode=0 Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.817192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerDied","Data":"42e34f7d42bedf62dc4e009080f8131952f4defcdd16073100bcf082dee371dd"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.817208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" 
event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerStarted","Data":"6602a973e131b13b73f020836b0afc0cd93c3da60a393eca3db4de2ca726f55a"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.821306 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" event={"ID":"d8a18336-1f12-45bf-a9e0-0c3106a4abe1","Type":"ContainerStarted","Data":"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.821333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" event={"ID":"d8a18336-1f12-45bf-a9e0-0c3106a4abe1","Type":"ContainerStarted","Data":"c3315b7cadc34c174bc5f3b94ecd97439de91f694bbeb4462fd4d639dba172f2"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.821731 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.827714 4792 generic.go:334] "Generic (PLEG): container finished" podID="422112a2-a6c2-4d09-aaeb-e4f9924ed96e" containerID="adad26fad0c9ffd603d8a730f225b94a613021e68583df3cd447d3f1170c9afe" exitCode=0 Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.827779 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" event={"ID":"422112a2-a6c2-4d09-aaeb-e4f9924ed96e","Type":"ContainerDied","Data":"adad26fad0c9ffd603d8a730f225b94a613021e68583df3cd447d3f1170c9afe"} Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.875626 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=7.875607318 podStartE2EDuration="7.875607318s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-19 16:44:40.868254067 +0000 UTC m=+244.014311607" watchObservedRunningTime="2026-03-19 16:44:40.875607318 +0000 UTC m=+244.021664858" Mar 19 16:44:40 crc kubenswrapper[4792]: I0319 16:44:40.889936 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" podStartSLOduration=187.88991199 podStartE2EDuration="3m7.88991199s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:40.886006063 +0000 UTC m=+244.032063603" watchObservedRunningTime="2026-03-19 16:44:40.88991199 +0000 UTC m=+244.035969540" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.158897 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.160558 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.176435 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab985610-78ac-44cf-a2ee-9a4a52dc431f-metrics-certs\") pod \"network-metrics-daemon-n8pzj\" (UID: \"ab985610-78ac-44cf-a2ee-9a4a52dc431f\") " pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.207030 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-15 09:34:37.855615844 +0000 UTC Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 
16:44:41.207065 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7240h49m56.648553609s for next certificate rotation Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.384137 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.392730 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9qk59" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.462191 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.470532 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n8pzj" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.702871 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:41 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:41 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:41 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.702930 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.737203 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n8pzj"] Mar 19 16:44:41 crc kubenswrapper[4792]: W0319 16:44:41.750638 4792 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab985610_78ac_44cf_a2ee_9a4a52dc431f.slice/crio-879d8b5ebeb2ead7d5461bc080d05f5a389332bfb092117054c4420987be71ae WatchSource:0}: Error finding container 879d8b5ebeb2ead7d5461bc080d05f5a389332bfb092117054c4420987be71ae: Status 404 returned error can't find the container with id 879d8b5ebeb2ead7d5461bc080d05f5a389332bfb092117054c4420987be71ae Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.836587 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" event={"ID":"ab985610-78ac-44cf-a2ee-9a4a52dc431f","Type":"ContainerStarted","Data":"879d8b5ebeb2ead7d5461bc080d05f5a389332bfb092117054c4420987be71ae"} Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.839589 4792 generic.go:334] "Generic (PLEG): container finished" podID="e3a0113e-520f-4ba0-96bd-ac0a33b087ad" containerID="285f5a15135f12df6033d4e249e9c7f40ff290efd2433fbb08b246e7cb5ca06f" exitCode=0 Mar 19 16:44:41 crc kubenswrapper[4792]: I0319 16:44:41.839807 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e3a0113e-520f-4ba0-96bd-ac0a33b087ad","Type":"ContainerDied","Data":"285f5a15135f12df6033d4e249e9c7f40ff290efd2433fbb08b246e7cb5ca06f"} Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.122230 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.170325 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzd5w\" (UniqueName: \"kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w\") pod \"422112a2-a6c2-4d09-aaeb-e4f9924ed96e\" (UID: \"422112a2-a6c2-4d09-aaeb-e4f9924ed96e\") " Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.176471 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w" (OuterVolumeSpecName: "kube-api-access-wzd5w") pod "422112a2-a6c2-4d09-aaeb-e4f9924ed96e" (UID: "422112a2-a6c2-4d09-aaeb-e4f9924ed96e"). InnerVolumeSpecName "kube-api-access-wzd5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.208191 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-02 22:07:27.863166457 +0000 UTC Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.208224 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6197h22m45.654944775s for next certificate rotation Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.272215 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzd5w\" (UniqueName: \"kubernetes.io/projected/422112a2-a6c2-4d09-aaeb-e4f9924ed96e-kube-api-access-wzd5w\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.702531 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:42 crc kubenswrapper[4792]: [-]has-synced 
failed: reason withheld Mar 19 16:44:42 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:42 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.702795 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.855635 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" event={"ID":"ab985610-78ac-44cf-a2ee-9a4a52dc431f","Type":"ContainerStarted","Data":"1dd2274c7104fc0529496c732c051d34605552a299d5d9b9445729ba7baf8d53"} Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.859140 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.859971 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565644-gg5p9" event={"ID":"422112a2-a6c2-4d09-aaeb-e4f9924ed96e","Type":"ContainerDied","Data":"58845583b672b9d5cf3ec9375de02c5f0c857637d759a3d68036b5625117f0cb"} Mar 19 16:44:42 crc kubenswrapper[4792]: I0319 16:44:42.860037 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58845583b672b9d5cf3ec9375de02c5f0c857637d759a3d68036b5625117f0cb" Mar 19 16:44:43 crc kubenswrapper[4792]: I0319 16:44:43.702245 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:43 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:43 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:43 crc 
kubenswrapper[4792]: healthz check failed Mar 19 16:44:43 crc kubenswrapper[4792]: I0319 16:44:43.702621 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:43 crc kubenswrapper[4792]: I0319 16:44:43.865043 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n8pzj" event={"ID":"ab985610-78ac-44cf-a2ee-9a4a52dc431f","Type":"ContainerStarted","Data":"bc8907147b97c2a84854cface842fdb9b1a0a11f98238e6480d84052e982e044"} Mar 19 16:44:44 crc kubenswrapper[4792]: I0319 16:44:44.701083 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 16:44:44 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Mar 19 16:44:44 crc kubenswrapper[4792]: [+]process-running ok Mar 19 16:44:44 crc kubenswrapper[4792]: healthz check failed Mar 19 16:44:44 crc kubenswrapper[4792]: I0319 16:44:44.701145 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.435181 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n8pzj" podStartSLOduration=192.435054719 podStartE2EDuration="3m12.435054719s" podCreationTimestamp="2026-03-19 16:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:44:43.879786648 +0000 UTC 
m=+247.025844178" watchObservedRunningTime="2026-03-19 16:44:45.435054719 +0000 UTC m=+248.581112259" Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.437201 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.437526 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" containerID="cri-o://e7b6555919034a57ec04c910b109f29abd54d494255aebd52a7550c7607655e8" gracePeriod=30 Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.467333 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.467537 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" containerID="cri-o://c4d6ddd6e5d7246f982a53c4a512769531e399e46a8a903f82e89d7a4ee0e3a7" gracePeriod=30 Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.702429 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:45 crc kubenswrapper[4792]: I0319 16:44:45.704649 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 16:44:48 crc kubenswrapper[4792]: E0319 16:44:48.969520 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data 
in memory cache]" Mar 19 16:44:49 crc kubenswrapper[4792]: I0319 16:44:49.560046 4792 patch_prober.go:28] interesting pod/controller-manager-5bb64b865b-wvmq6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:44:49 crc kubenswrapper[4792]: I0319 16:44:49.560132 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:44:49 crc kubenswrapper[4792]: I0319 16:44:49.572321 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:49 crc kubenswrapper[4792]: I0319 16:44:49.578561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:44:50 crc kubenswrapper[4792]: I0319 16:44:50.231230 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:44:50 crc kubenswrapper[4792]: I0319 16:44:50.231285 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 
19 16:44:51 crc kubenswrapper[4792]: I0319 16:44:51.194361 4792 patch_prober.go:28] interesting pod/route-controller-manager-8bd46f65c-pgmlx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 19 16:44:51 crc kubenswrapper[4792]: I0319 16:44:51.194740 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.794372 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.935927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e3a0113e-520f-4ba0-96bd-ac0a33b087ad","Type":"ContainerDied","Data":"9beb835253ab20d9fbe33dcf9c022545846e3f694fa9986695f44f6a1a86cb8a"} Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.936353 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9beb835253ab20d9fbe33dcf9c022545846e3f694fa9986695f44f6a1a86cb8a" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.936018 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.937441 4792 generic.go:334] "Generic (PLEG): container finished" podID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerID="c4d6ddd6e5d7246f982a53c4a512769531e399e46a8a903f82e89d7a4ee0e3a7" exitCode=0 Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.937483 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" event={"ID":"5da2b6a6-e385-40b1-9a80-4ec5c268d043","Type":"ContainerDied","Data":"c4d6ddd6e5d7246f982a53c4a512769531e399e46a8a903f82e89d7a4ee0e3a7"} Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.958815 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir\") pod \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.958904 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access\") pod \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\" (UID: \"e3a0113e-520f-4ba0-96bd-ac0a33b087ad\") " Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.958950 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3a0113e-520f-4ba0-96bd-ac0a33b087ad" (UID: "e3a0113e-520f-4ba0-96bd-ac0a33b087ad"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.959319 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:56 crc kubenswrapper[4792]: I0319 16:44:56.966851 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3a0113e-520f-4ba0-96bd-ac0a33b087ad" (UID: "e3a0113e-520f-4ba0-96bd-ac0a33b087ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:44:57 crc kubenswrapper[4792]: I0319 16:44:57.060566 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3a0113e-520f-4ba0-96bd-ac0a33b087ad-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:44:57 crc kubenswrapper[4792]: I0319 16:44:57.944041 4792 generic.go:334] "Generic (PLEG): container finished" podID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerID="e7b6555919034a57ec04c910b109f29abd54d494255aebd52a7550c7607655e8" exitCode=0 Mar 19 16:44:57 crc kubenswrapper[4792]: I0319 16:44:57.944096 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" event={"ID":"a2d1a8e0-5e00-4f64-b171-213fa622a25d","Type":"ContainerDied","Data":"e7b6555919034a57ec04c910b109f29abd54d494255aebd52a7550c7607655e8"} Mar 19 16:44:58 crc kubenswrapper[4792]: I0319 16:44:58.560186 4792 patch_prober.go:28] interesting pod/controller-manager-5bb64b865b-wvmq6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 19 
16:44:58 crc kubenswrapper[4792]: I0319 16:44:58.560274 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 19 16:44:59 crc kubenswrapper[4792]: E0319 16:44:59.101390 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data in memory cache]" Mar 19 16:44:59 crc kubenswrapper[4792]: I0319 16:44:59.828087 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.129702 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt"] Mar 19 16:45:00 crc kubenswrapper[4792]: E0319 16:45:00.130172 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422112a2-a6c2-4d09-aaeb-e4f9924ed96e" containerName="oc" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130185 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="422112a2-a6c2-4d09-aaeb-e4f9924ed96e" containerName="oc" Mar 19 16:45:00 crc kubenswrapper[4792]: E0319 16:45:00.130192 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a0113e-520f-4ba0-96bd-ac0a33b087ad" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130198 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a0113e-520f-4ba0-96bd-ac0a33b087ad" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: E0319 16:45:00.130210 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ecfa468d-32df-43ac-8884-40aad47fd099" containerName="collect-profiles" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130215 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecfa468d-32df-43ac-8884-40aad47fd099" containerName="collect-profiles" Mar 19 16:45:00 crc kubenswrapper[4792]: E0319 16:45:00.130235 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130240 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130324 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecfa468d-32df-43ac-8884-40aad47fd099" containerName="collect-profiles" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130335 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="422112a2-a6c2-4d09-aaeb-e4f9924ed96e" containerName="oc" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130345 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a0113e-520f-4ba0-96bd-ac0a33b087ad" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130355 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c02b85-1a6e-4c84-9ea4-48f063bd9ed5" containerName="pruner" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.130685 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.133270 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.133280 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.137518 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt"] Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.324579 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.324637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.324703 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvt2h\" (UniqueName: \"kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.425784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.425942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.426085 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvt2h\" (UniqueName: \"kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.427005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.434583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.458962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvt2h\" (UniqueName: \"kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h\") pod \"collect-profiles-29565645-bzhrt\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:00 crc kubenswrapper[4792]: I0319 16:45:00.754542 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:01 crc kubenswrapper[4792]: I0319 16:45:01.195168 4792 patch_prober.go:28] interesting pod/route-controller-manager-8bd46f65c-pgmlx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 19 16:45:01 crc kubenswrapper[4792]: I0319 16:45:01.195293 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 19 16:45:01 crc kubenswrapper[4792]: I0319 16:45:01.747073 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.172748 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.691819 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.695248 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.716411 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:04 crc kubenswrapper[4792]: E0319 16:45:04.722828 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.722867 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" Mar 19 16:45:04 crc kubenswrapper[4792]: E0319 16:45:04.722880 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.722887 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.722993 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" containerName="route-controller-manager" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.723005 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" containerName="controller-manager" Mar 19 16:45:04 
crc kubenswrapper[4792]: I0319 16:45:04.724379 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.753885 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.791471 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert\") pod \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.791576 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config\") pod \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.791612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpj2\" (UniqueName: \"kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2\") pod \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.791645 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca\") pod \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\" (UID: \"5da2b6a6-e385-40b1-9a80-4ec5c268d043\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.792650 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config" (OuterVolumeSpecName: "config") pod "5da2b6a6-e385-40b1-9a80-4ec5c268d043" (UID: "5da2b6a6-e385-40b1-9a80-4ec5c268d043"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.792822 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca" (OuterVolumeSpecName: "client-ca") pod "5da2b6a6-e385-40b1-9a80-4ec5c268d043" (UID: "5da2b6a6-e385-40b1-9a80-4ec5c268d043"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.796983 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5da2b6a6-e385-40b1-9a80-4ec5c268d043" (UID: "5da2b6a6-e385-40b1-9a80-4ec5c268d043"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.810905 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2" (OuterVolumeSpecName: "kube-api-access-4wpj2") pod "5da2b6a6-e385-40b1-9a80-4ec5c268d043" (UID: "5da2b6a6-e385-40b1-9a80-4ec5c268d043"). InnerVolumeSpecName "kube-api-access-4wpj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.892686 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkwkf\" (UniqueName: \"kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf\") pod \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.892746 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config\") pod \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.892863 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles\") pod \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.892903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert\") pod \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.892926 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca\") pod \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\" (UID: \"a2d1a8e0-5e00-4f64-b171-213fa622a25d\") " Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893099 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkr88\" (UniqueName: \"kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893273 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893285 4792 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da2b6a6-e385-40b1-9a80-4ec5c268d043-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893294 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da2b6a6-e385-40b1-9a80-4ec5c268d043-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893303 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wpj2\" (UniqueName: \"kubernetes.io/projected/5da2b6a6-e385-40b1-9a80-4ec5c268d043-kube-api-access-4wpj2\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893766 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a2d1a8e0-5e00-4f64-b171-213fa622a25d" (UID: "a2d1a8e0-5e00-4f64-b171-213fa622a25d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.893783 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config" (OuterVolumeSpecName: "config") pod "a2d1a8e0-5e00-4f64-b171-213fa622a25d" (UID: "a2d1a8e0-5e00-4f64-b171-213fa622a25d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.894107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca" (OuterVolumeSpecName: "client-ca") pod "a2d1a8e0-5e00-4f64-b171-213fa622a25d" (UID: "a2d1a8e0-5e00-4f64-b171-213fa622a25d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.896072 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf" (OuterVolumeSpecName: "kube-api-access-fkwkf") pod "a2d1a8e0-5e00-4f64-b171-213fa622a25d" (UID: "a2d1a8e0-5e00-4f64-b171-213fa622a25d"). InnerVolumeSpecName "kube-api-access-fkwkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.896467 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2d1a8e0-5e00-4f64-b171-213fa622a25d" (UID: "a2d1a8e0-5e00-4f64-b171-213fa622a25d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.982523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" event={"ID":"a2d1a8e0-5e00-4f64-b171-213fa622a25d","Type":"ContainerDied","Data":"6df8ba1fc37fc5afe553871543f162358f589b1a7aadae6f32c8669632408606"} Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.982570 4792 scope.go:117] "RemoveContainer" containerID="e7b6555919034a57ec04c910b109f29abd54d494255aebd52a7550c7607655e8" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.982668 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb64b865b-wvmq6" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.984560 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" event={"ID":"5da2b6a6-e385-40b1-9a80-4ec5c268d043","Type":"ContainerDied","Data":"d2135111696e22cff0ef4f490cf6033dd3667a394dce26977d2399620ae9271c"} Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.984717 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkr88\" (UniqueName: \"kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994512 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 
crc kubenswrapper[4792]: I0319 16:45:04.994544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994607 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkwkf\" (UniqueName: \"kubernetes.io/projected/a2d1a8e0-5e00-4f64-b171-213fa622a25d-kube-api-access-fkwkf\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994619 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994629 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994639 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1a8e0-5e00-4f64-b171-213fa622a25d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.994648 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2d1a8e0-5e00-4f64-b171-213fa622a25d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.996400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config\") pod 
\"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.998024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:04 crc kubenswrapper[4792]: I0319 16:45:04.998405 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.009411 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.010536 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkr88\" (UniqueName: \"kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88\") pod \"route-controller-manager-8468f78df8-dpjgm\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.013806 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bb64b865b-wvmq6"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.021239 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.025101 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bd46f65c-pgmlx"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.045707 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.454057 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.454694 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.456418 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.461135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.461207 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.461460 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.464773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.465000 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.465167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.468245 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.528451 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.602683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.602729 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.602802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.602939 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hlth\" (UniqueName: \"kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.602999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.704292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.704597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hlth\" (UniqueName: \"kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.704628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: 
\"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.704663 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.704685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.705738 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.706027 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.707408 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.711495 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.719265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hlth\" (UniqueName: \"kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth\") pod \"controller-manager-6bf64c974c-tgwvr\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.747887 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da2b6a6-e385-40b1-9a80-4ec5c268d043" path="/var/lib/kubelet/pods/5da2b6a6-e385-40b1-9a80-4ec5c268d043/volumes" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.748759 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d1a8e0-5e00-4f64-b171-213fa622a25d" path="/var/lib/kubelet/pods/a2d1a8e0-5e00-4f64-b171-213fa622a25d/volumes" Mar 19 16:45:05 crc kubenswrapper[4792]: I0319 16:45:05.768608 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:07 crc kubenswrapper[4792]: E0319 16:45:07.866152 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 19 16:45:07 crc kubenswrapper[4792]: E0319 16:45:07.866749 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-778j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil
,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzjbz_openshift-marketplace(ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:07 crc kubenswrapper[4792]: E0319 16:45:07.867822 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mzjbz" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" Mar 19 16:45:09 crc kubenswrapper[4792]: E0319 16:45:09.199768 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data in memory cache]" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.253128 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.254278 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.257753 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.258243 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.258421 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.347190 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.366681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.366737 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.468364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.468494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.468588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.524734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:10 crc kubenswrapper[4792]: I0319 16:45:10.606190 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:11 crc kubenswrapper[4792]: E0319 16:45:11.618055 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzjbz" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" Mar 19 16:45:11 crc kubenswrapper[4792]: E0319 16:45:11.695036 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 19 16:45:11 crc kubenswrapper[4792]: E0319 16:45:11.695177 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqvhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dhxns_openshift-marketplace(e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:11 crc kubenswrapper[4792]: E0319 16:45:11.696348 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dhxns" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" Mar 19 16:45:14 crc 
kubenswrapper[4792]: E0319 16:45:14.697789 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dhxns" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" Mar 19 16:45:14 crc kubenswrapper[4792]: I0319 16:45:14.968181 4792 scope.go:117] "RemoveContainer" containerID="c4d6ddd6e5d7246f982a53c4a512769531e399e46a8a903f82e89d7a4ee0e3a7" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.054583 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.054869 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgzrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n5pth_openshift-marketplace(7b49f828-0dec-4a3f-9247-7ef8b8882b52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.056349 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n5pth" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" Mar 19 16:45:15 crc 
kubenswrapper[4792]: E0319 16:45:15.112160 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.112628 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj452,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-fd9rl_openshift-marketplace(39daf6b3-68ce-429a-b454-1a07c6706a9e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.113944 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fd9rl" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.218011 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.218189 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjtvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4724c_openshift-marketplace(f04d1453-ed31-4e0f-a10c-89ebac7f8f51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.219523 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4724c" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" Mar 19 16:45:15 crc 
kubenswrapper[4792]: I0319 16:45:15.222268 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt"] Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.258517 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.271373 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.298822 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.342704 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.342927 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwssq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-25ctb_openshift-marketplace(de7d0c67-0339-42c9-8330-f80dfd39c860): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.344498 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-25ctb" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" Mar 19 16:45:15 crc 
kubenswrapper[4792]: E0319 16:45:15.417536 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.418134 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-r4272_openshift-marketplace(2e59df38-8404-4664-96d7-481e34988bee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 16:45:15 crc kubenswrapper[4792]: E0319 16:45:15.419307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r4272" podUID="2e59df38-8404-4664-96d7-481e34988bee" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.436486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.436602 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.436743 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.457704 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:15 crc kubenswrapper[4792]: W0319 16:45:15.464089 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode327c16b_6fcf_464e_b607_a5971dbcd7e8.slice/crio-6086babf1b7dbf7717bd3504b1776a99f1da2bfb1eeba7c3a325d2c84f8dd4be WatchSource:0}: Error finding container 6086babf1b7dbf7717bd3504b1776a99f1da2bfb1eeba7c3a325d2c84f8dd4be: Status 404 returned error can't find the container with id 6086babf1b7dbf7717bd3504b1776a99f1da2bfb1eeba7c3a325d2c84f8dd4be Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.470001 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.536262 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.538294 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.538324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.538369 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir\") pod 
\"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.538459 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.538544 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: W0319 16:45:15.545727 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b44e89a_8408_4aa4_a28a_1c38f802a3f0.slice/crio-c43f6011a0b90cc147cfc70dc60fa41224fa40255a26e9e8e1cd513bafe90630 WatchSource:0}: Error finding container c43f6011a0b90cc147cfc70dc60fa41224fa40255a26e9e8e1cd513bafe90630: Status 404 returned error can't find the container with id c43f6011a0b90cc147cfc70dc60fa41224fa40255a26e9e8e1cd513bafe90630 Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.558649 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.603770 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:15 crc kubenswrapper[4792]: I0319 16:45:15.816568 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 19 16:45:15 crc kubenswrapper[4792]: W0319 16:45:15.824458 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddcdf5c25_1486_4e52_ba7a_381ceb4d6521.slice/crio-0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3 WatchSource:0}: Error finding container 0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3: Status 404 returned error can't find the container with id 0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3 Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.058337 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65","Type":"ContainerStarted","Data":"7bd4005fdd54ac51f72d1b4c3ce02ac35f9ca9bc91ee4df57fe7f92dba54ecb7"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.058949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65","Type":"ContainerStarted","Data":"5d1637c6823e78e83c9240dc4a6e4e0c1ec5693e0345fdbd9631c83d8ba83c37"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.061886 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdf5c25-1486-4e52-ba7a-381ceb4d6521","Type":"ContainerStarted","Data":"0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.063523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" 
event={"ID":"e327c16b-6fcf-464e-b607-a5971dbcd7e8","Type":"ContainerStarted","Data":"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.063601 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" event={"ID":"e327c16b-6fcf-464e-b607-a5971dbcd7e8","Type":"ContainerStarted","Data":"6086babf1b7dbf7717bd3504b1776a99f1da2bfb1eeba7c3a325d2c84f8dd4be"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.063626 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerName="route-controller-manager" containerID="cri-o://0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0" gracePeriod=30 Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.063890 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.066954 4792 generic.go:334] "Generic (PLEG): container finished" podID="041d9c13-d181-48a0-bab9-efb2d845d365" containerID="66405a81fd87ca24c460c282ef855c2b208731b75db2be3cf5d0f7c4b953da3f" exitCode=0 Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.067029 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" event={"ID":"041d9c13-d181-48a0-bab9-efb2d845d365","Type":"ContainerDied","Data":"66405a81fd87ca24c460c282ef855c2b208731b75db2be3cf5d0f7c4b953da3f"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.067054 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" 
event={"ID":"041d9c13-d181-48a0-bab9-efb2d845d365","Type":"ContainerStarted","Data":"ae6729706a7667fd63402872c972dee5b863cf38ebfbd2c1a2eb251625b7d40b"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.070777 4792 generic.go:334] "Generic (PLEG): container finished" podID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerID="05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe" exitCode=0 Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.070866 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerDied","Data":"05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.075307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" event={"ID":"2b44e89a-8408-4aa4-a28a-1c38f802a3f0","Type":"ContainerStarted","Data":"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.075342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" event={"ID":"2b44e89a-8408-4aa4-a28a-1c38f802a3f0","Type":"ContainerStarted","Data":"c43f6011a0b90cc147cfc70dc60fa41224fa40255a26e9e8e1cd513bafe90630"} Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.075360 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.075903 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r4272" 
podUID="2e59df38-8404-4664-96d7-481e34988bee" Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.075970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fd9rl" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.077966 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n5pth" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.078034 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-25ctb" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.078286 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4724c" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.097757 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.116498 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.1164681850000004 podStartE2EDuration="6.116468185s" podCreationTimestamp="2026-03-19 16:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:16.090238907 +0000 UTC m=+279.236296447" watchObservedRunningTime="2026-03-19 16:45:16.116468185 +0000 UTC m=+279.262525715" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.130951 4792 patch_prober.go:28] interesting pod/route-controller-manager-8468f78df8-dpjgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:43402->10.217.0.58:8443: read: connection reset by peer" start-of-body= Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.131074 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:43402->10.217.0.58:8443: read: connection reset by peer" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.193933 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" podStartSLOduration=31.193918476 podStartE2EDuration="31.193918476s" podCreationTimestamp="2026-03-19 16:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:16.19151861 +0000 UTC m=+279.337576150" watchObservedRunningTime="2026-03-19 16:45:16.193918476 +0000 UTC m=+279.339976016" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.234109 4792 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" podStartSLOduration=11.234071616 podStartE2EDuration="11.234071616s" podCreationTimestamp="2026-03-19 16:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:16.229067978 +0000 UTC m=+279.375125518" watchObservedRunningTime="2026-03-19 16:45:16.234071616 +0000 UTC m=+279.380129156" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.507283 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8468f78df8-dpjgm_e327c16b-6fcf-464e-b607-a5971dbcd7e8/route-controller-manager/0.log" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.507605 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.550432 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:16 crc kubenswrapper[4792]: E0319 16:45:16.550791 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerName="route-controller-manager" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.550821 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerName="route-controller-manager" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.551022 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerName="route-controller-manager" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.551648 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.555097 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.653708 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config\") pod \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.653804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkr88\" (UniqueName: \"kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88\") pod \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.653943 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert\") pod \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.653998 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca\") pod \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\" (UID: \"e327c16b-6fcf-464e-b607-a5971dbcd7e8\") " Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654190 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca\") pod 
\"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhb5\" (UniqueName: \"kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "e327c16b-6fcf-464e-b607-a5971dbcd7e8" (UID: "e327c16b-6fcf-464e-b607-a5971dbcd7e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.654588 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config" (OuterVolumeSpecName: "config") pod "e327c16b-6fcf-464e-b607-a5971dbcd7e8" (UID: "e327c16b-6fcf-464e-b607-a5971dbcd7e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.660532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e327c16b-6fcf-464e-b607-a5971dbcd7e8" (UID: "e327c16b-6fcf-464e-b607-a5971dbcd7e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.661049 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88" (OuterVolumeSpecName: "kube-api-access-vkr88") pod "e327c16b-6fcf-464e-b607-a5971dbcd7e8" (UID: "e327c16b-6fcf-464e-b607-a5971dbcd7e8"). InnerVolumeSpecName "kube-api-access-vkr88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdhb5\" (UniqueName: \"kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756093 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756243 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756260 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e327c16b-6fcf-464e-b607-a5971dbcd7e8-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756274 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkr88\" (UniqueName: \"kubernetes.io/projected/e327c16b-6fcf-464e-b607-a5971dbcd7e8-kube-api-access-vkr88\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.756286 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e327c16b-6fcf-464e-b607-a5971dbcd7e8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.758164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.758635 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.763408 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert\") pod 
\"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.771348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdhb5\" (UniqueName: \"kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5\") pod \"route-controller-manager-86698db569-zvd7v\" (UID: \"af642110-a473-4483-8475-60439fced495\") " pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:16 crc kubenswrapper[4792]: I0319 16:45:16.896609 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.083042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdf5c25-1486-4e52-ba7a-381ceb4d6521","Type":"ContainerStarted","Data":"3d889e5ac0d4dcf5b8311946f2dee74ffd9b00280b313a85f79e448519d3fbcf"} Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085143 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8468f78df8-dpjgm_e327c16b-6fcf-464e-b607-a5971dbcd7e8/route-controller-manager/0.log" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085179 4792 generic.go:334] "Generic (PLEG): container finished" podID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" containerID="0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0" exitCode=255 Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" 
event={"ID":"e327c16b-6fcf-464e-b607-a5971dbcd7e8","Type":"ContainerDied","Data":"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0"} Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085243 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" event={"ID":"e327c16b-6fcf-464e-b607-a5971dbcd7e8","Type":"ContainerDied","Data":"6086babf1b7dbf7717bd3504b1776a99f1da2bfb1eeba7c3a325d2c84f8dd4be"} Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085263 4792 scope.go:117] "RemoveContainer" containerID="0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.085376 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.096170 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerStarted","Data":"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c"} Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.100530 4792 generic.go:334] "Generic (PLEG): container finished" podID="d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" containerID="7bd4005fdd54ac51f72d1b4c3ce02ac35f9ca9bc91ee4df57fe7f92dba54ecb7" exitCode=0 Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.101066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65","Type":"ContainerDied","Data":"7bd4005fdd54ac51f72d1b4c3ce02ac35f9ca9bc91ee4df57fe7f92dba54ecb7"} Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.111779 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" 
podStartSLOduration=2.111759641 podStartE2EDuration="2.111759641s" podCreationTimestamp="2026-03-19 16:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:17.105745516 +0000 UTC m=+280.251803076" watchObservedRunningTime="2026-03-19 16:45:17.111759641 +0000 UTC m=+280.257817181" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.122576 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.122950 4792 scope.go:117] "RemoveContainer" containerID="0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0" Mar 19 16:45:17 crc kubenswrapper[4792]: E0319 16:45:17.123683 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0\": container with ID starting with 0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0 not found: ID does not exist" containerID="0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.123724 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0"} err="failed to get container status \"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0\": rpc error: code = NotFound desc = could not find container \"0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0\": container with ID starting with 0509297ea7f2a85631a9501129aaf63cbf98482489de736bf3f0d2a8e9b987b0 not found: ID does not exist" Mar 19 16:45:17 crc kubenswrapper[4792]: W0319 16:45:17.126688 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf642110_a473_4483_8475_60439fced495.slice/crio-a149070c9cfd91f79707b4d9455093c583e0d73e59cbefcce622df06520034db WatchSource:0}: Error finding container a149070c9cfd91f79707b4d9455093c583e0d73e59cbefcce622df06520034db: Status 404 returned error can't find the container with id a149070c9cfd91f79707b4d9455093c583e0d73e59cbefcce622df06520034db Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.132103 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qwbvn" podStartSLOduration=7.355654269 podStartE2EDuration="44.132086487s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="2026-03-19 16:44:39.751206176 +0000 UTC m=+242.897263716" lastFinishedPulling="2026-03-19 16:45:16.527638394 +0000 UTC m=+279.673695934" observedRunningTime="2026-03-19 16:45:17.129366703 +0000 UTC m=+280.275424263" watchObservedRunningTime="2026-03-19 16:45:17.132086487 +0000 UTC m=+280.278144027" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.163389 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.166094 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8468f78df8-dpjgm"] Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.349876 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.466405 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvt2h\" (UniqueName: \"kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h\") pod \"041d9c13-d181-48a0-bab9-efb2d845d365\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.466553 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume\") pod \"041d9c13-d181-48a0-bab9-efb2d845d365\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.466604 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume\") pod \"041d9c13-d181-48a0-bab9-efb2d845d365\" (UID: \"041d9c13-d181-48a0-bab9-efb2d845d365\") " Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.467435 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume" (OuterVolumeSpecName: "config-volume") pod "041d9c13-d181-48a0-bab9-efb2d845d365" (UID: "041d9c13-d181-48a0-bab9-efb2d845d365"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.474111 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h" (OuterVolumeSpecName: "kube-api-access-nvt2h") pod "041d9c13-d181-48a0-bab9-efb2d845d365" (UID: "041d9c13-d181-48a0-bab9-efb2d845d365"). 
InnerVolumeSpecName "kube-api-access-nvt2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.474317 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "041d9c13-d181-48a0-bab9-efb2d845d365" (UID: "041d9c13-d181-48a0-bab9-efb2d845d365"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.568387 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041d9c13-d181-48a0-bab9-efb2d845d365-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.568905 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041d9c13-d181-48a0-bab9-efb2d845d365-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.568919 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvt2h\" (UniqueName: \"kubernetes.io/projected/041d9c13-d181-48a0-bab9-efb2d845d365-kube-api-access-nvt2h\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:17 crc kubenswrapper[4792]: I0319 16:45:17.748381 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e327c16b-6fcf-464e-b607-a5971dbcd7e8" path="/var/lib/kubelet/pods/e327c16b-6fcf-464e-b607-a5971dbcd7e8/volumes" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.108076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" event={"ID":"af642110-a473-4483-8475-60439fced495","Type":"ContainerStarted","Data":"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d"} Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 
16:45:18.108147 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" event={"ID":"af642110-a473-4483-8475-60439fced495","Type":"ContainerStarted","Data":"a149070c9cfd91f79707b4d9455093c583e0d73e59cbefcce622df06520034db"} Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.108374 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.111480 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.111492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt" event={"ID":"041d9c13-d181-48a0-bab9-efb2d845d365","Type":"ContainerDied","Data":"ae6729706a7667fd63402872c972dee5b863cf38ebfbd2c1a2eb251625b7d40b"} Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.111541 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6729706a7667fd63402872c972dee5b863cf38ebfbd2c1a2eb251625b7d40b" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.118113 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.126528 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" podStartSLOduration=13.126499609 podStartE2EDuration="13.126499609s" podCreationTimestamp="2026-03-19 16:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
16:45:18.123392264 +0000 UTC m=+281.269449794" watchObservedRunningTime="2026-03-19 16:45:18.126499609 +0000 UTC m=+281.272557169" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.369154 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.478204 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir\") pod \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.478257 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access\") pod \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\" (UID: \"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65\") " Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.478353 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" (UID: "d7508b1a-ec4e-4b8d-9c36-2b22aecabe65"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.478509 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.483180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" (UID: "d7508b1a-ec4e-4b8d-9c36-2b22aecabe65"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:18 crc kubenswrapper[4792]: I0319 16:45:18.581250 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7508b1a-ec4e-4b8d-9c36-2b22aecabe65-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:19 crc kubenswrapper[4792]: I0319 16:45:19.118606 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 19 16:45:19 crc kubenswrapper[4792]: I0319 16:45:19.118598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7508b1a-ec4e-4b8d-9c36-2b22aecabe65","Type":"ContainerDied","Data":"5d1637c6823e78e83c9240dc4a6e4e0c1ec5693e0345fdbd9631c83d8ba83c37"} Mar 19 16:45:19 crc kubenswrapper[4792]: I0319 16:45:19.118698 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d1637c6823e78e83c9240dc4a6e4e0c1ec5693e0345fdbd9631c83d8ba83c37" Mar 19 16:45:19 crc kubenswrapper[4792]: E0319 16:45:19.334721 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data in memory cache]" Mar 19 16:45:20 crc kubenswrapper[4792]: I0319 16:45:20.231083 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:45:20 crc kubenswrapper[4792]: I0319 16:45:20.231502 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:45:20 crc kubenswrapper[4792]: I0319 16:45:20.231883 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:45:20 crc kubenswrapper[4792]: I0319 16:45:20.232346 4792 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 16:45:20 crc kubenswrapper[4792]: I0319 16:45:20.232397 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e" gracePeriod=600 Mar 19 16:45:21 crc kubenswrapper[4792]: I0319 16:45:21.129951 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e" exitCode=0 Mar 19 16:45:21 crc kubenswrapper[4792]: I0319 16:45:21.129996 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e"} Mar 19 16:45:22 crc kubenswrapper[4792]: I0319 16:45:22.137578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef"} Mar 19 16:45:23 crc kubenswrapper[4792]: I0319 16:45:23.583651 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:45:23 crc kubenswrapper[4792]: I0319 16:45:23.583996 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:45:23 crc kubenswrapper[4792]: I0319 16:45:23.719866 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:45:24 crc kubenswrapper[4792]: I0319 16:45:24.188291 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.423914 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.424490 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerName="controller-manager" containerID="cri-o://3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b" gracePeriod=30 Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.449733 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.450009 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" podUID="af642110-a473-4483-8475-60439fced495" containerName="route-controller-manager" containerID="cri-o://c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d" gracePeriod=30 Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.908752 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:25 crc kubenswrapper[4792]: I0319 16:45:25.968735 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068358 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config\") pod \"af642110-a473-4483-8475-60439fced495\" (UID: \"af642110-a473-4483-8475-60439fced495\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068404 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca\") pod \"af642110-a473-4483-8475-60439fced495\" (UID: \"af642110-a473-4483-8475-60439fced495\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068452 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdhb5\" (UniqueName: \"kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5\") pod \"af642110-a473-4483-8475-60439fced495\" (UID: \"af642110-a473-4483-8475-60439fced495\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068486 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config\") pod \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068517 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert\") pod 
\"af642110-a473-4483-8475-60439fced495\" (UID: \"af642110-a473-4483-8475-60439fced495\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068559 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles\") pod \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca\") pod \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hlth\" (UniqueName: \"kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth\") pod \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.068659 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert\") pod \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\" (UID: \"2b44e89a-8408-4aa4-a28a-1c38f802a3f0\") " Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.069303 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca" (OuterVolumeSpecName: "client-ca") pod "af642110-a473-4483-8475-60439fced495" (UID: "af642110-a473-4483-8475-60439fced495"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.069400 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config" (OuterVolumeSpecName: "config") pod "af642110-a473-4483-8475-60439fced495" (UID: "af642110-a473-4483-8475-60439fced495"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.069873 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b44e89a-8408-4aa4-a28a-1c38f802a3f0" (UID: "2b44e89a-8408-4aa4-a28a-1c38f802a3f0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.069936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b44e89a-8408-4aa4-a28a-1c38f802a3f0" (UID: "2b44e89a-8408-4aa4-a28a-1c38f802a3f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.070236 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config" (OuterVolumeSpecName: "config") pod "2b44e89a-8408-4aa4-a28a-1c38f802a3f0" (UID: "2b44e89a-8408-4aa4-a28a-1c38f802a3f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.074419 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b44e89a-8408-4aa4-a28a-1c38f802a3f0" (UID: "2b44e89a-8408-4aa4-a28a-1c38f802a3f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.074440 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af642110-a473-4483-8475-60439fced495" (UID: "af642110-a473-4483-8475-60439fced495"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.075111 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth" (OuterVolumeSpecName: "kube-api-access-5hlth") pod "2b44e89a-8408-4aa4-a28a-1c38f802a3f0" (UID: "2b44e89a-8408-4aa4-a28a-1c38f802a3f0"). InnerVolumeSpecName "kube-api-access-5hlth". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.075230 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5" (OuterVolumeSpecName: "kube-api-access-jdhb5") pod "af642110-a473-4483-8475-60439fced495" (UID: "af642110-a473-4483-8475-60439fced495"). InnerVolumeSpecName "kube-api-access-jdhb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.166359 4792 generic.go:334] "Generic (PLEG): container finished" podID="af642110-a473-4483-8475-60439fced495" containerID="c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d" exitCode=0 Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.166441 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" event={"ID":"af642110-a473-4483-8475-60439fced495","Type":"ContainerDied","Data":"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d"} Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.166497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" event={"ID":"af642110-a473-4483-8475-60439fced495","Type":"ContainerDied","Data":"a149070c9cfd91f79707b4d9455093c583e0d73e59cbefcce622df06520034db"} Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.166517 4792 scope.go:117] "RemoveContainer" containerID="c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.166718 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174297 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174348 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174381 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hlth\" (UniqueName: \"kubernetes.io/projected/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-kube-api-access-5hlth\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174441 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174484 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174498 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af642110-a473-4483-8475-60439fced495-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174510 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdhb5\" (UniqueName: \"kubernetes.io/projected/af642110-a473-4483-8475-60439fced495-kube-api-access-jdhb5\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc 
kubenswrapper[4792]: I0319 16:45:26.174521 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b44e89a-8408-4aa4-a28a-1c38f802a3f0-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.174532 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af642110-a473-4483-8475-60439fced495-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.175254 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerID="d00d17bbbeaa0191d122e2f8b1544420736dcfbfc63faeed4dd74ac9d1b1b38d" exitCode=0 Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.175314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerDied","Data":"d00d17bbbeaa0191d122e2f8b1544420736dcfbfc63faeed4dd74ac9d1b1b38d"} Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.179468 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerID="3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b" exitCode=0 Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.179495 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" event={"ID":"2b44e89a-8408-4aa4-a28a-1c38f802a3f0","Type":"ContainerDied","Data":"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b"} Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.179514 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" 
event={"ID":"2b44e89a-8408-4aa4-a28a-1c38f802a3f0","Type":"ContainerDied","Data":"c43f6011a0b90cc147cfc70dc60fa41224fa40255a26e9e8e1cd513bafe90630"} Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.180072 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.190472 4792 scope.go:117] "RemoveContainer" containerID="c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d" Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.191853 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d\": container with ID starting with c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d not found: ID does not exist" containerID="c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.191887 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d"} err="failed to get container status \"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d\": rpc error: code = NotFound desc = could not find container \"c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d\": container with ID starting with c1d177675f8ad3ab366bc24585a611ad36893531c83d8cd20668739a63104d0d not found: ID does not exist" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.191910 4792 scope.go:117] "RemoveContainer" containerID="3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.205127 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:26 
crc kubenswrapper[4792]: I0319 16:45:26.207353 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86698db569-zvd7v"] Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.210937 4792 scope.go:117] "RemoveContainer" containerID="3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b" Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.211351 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b\": container with ID starting with 3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b not found: ID does not exist" containerID="3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.211387 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b"} err="failed to get container status \"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b\": rpc error: code = NotFound desc = could not find container \"3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b\": container with ID starting with 3e854eb3a1108dcef092b1b2d5d128b095c8bc1f7f7a60d6f8bf8e08298d360b not found: ID does not exist" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.224817 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.228799 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bf64c974c-tgwvr"] Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.770694 4792 patch_prober.go:28] interesting pod/controller-manager-6bf64c974c-tgwvr container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.770783 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6bf64c974c-tgwvr" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.906894 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.907142 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerName="controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907166 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerName="controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.907184 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041d9c13-d181-48a0-bab9-efb2d845d365" containerName="collect-profiles" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907192 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="041d9c13-d181-48a0-bab9-efb2d845d365" containerName="collect-profiles" Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.907203 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" containerName="pruner" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907211 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" containerName="pruner" Mar 19 16:45:26 crc kubenswrapper[4792]: E0319 16:45:26.907224 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af642110-a473-4483-8475-60439fced495" containerName="route-controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907232 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="af642110-a473-4483-8475-60439fced495" containerName="route-controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907356 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="041d9c13-d181-48a0-bab9-efb2d845d365" containerName="collect-profiles" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907374 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" containerName="controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907388 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7508b1a-ec4e-4b8d-9c36-2b22aecabe65" containerName="pruner" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907401 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="af642110-a473-4483-8475-60439fced495" containerName="route-controller-manager" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.907828 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.910111 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.910596 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.911492 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.911992 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.913782 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.914907 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.917885 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.918210 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.918410 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.918491 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.918599 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.919332 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.919901 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.921189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.921442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.921472 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Mar 19 16:45:26 crc kubenswrapper[4792]: I0319 16:45:26.926435 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086594 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086623 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086647 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcbs\" (UniqueName: \"kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.086946 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.087065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5w22\" (UniqueName: \"kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.087132 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.186701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerStarted","Data":"957009c6a72c10a859d258ee406b298f882aab30ce5b91e8edd8e4e76ba86337"} Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.187810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.187918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.187937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.187956 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxcbs\" (UniqueName: \"kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.187994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.188014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5w22\" (UniqueName: \"kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.188029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.188305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: 
\"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.188338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.189192 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.191165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.191808 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.192214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config\") pod 
\"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.193565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.210790 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.211487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.211820 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzjbz" podStartSLOduration=7.291879033 podStartE2EDuration="54.211810461s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="2026-03-19 16:44:39.748920024 +0000 UTC m=+242.894977564" lastFinishedPulling="2026-03-19 16:45:26.668851432 +0000 UTC m=+289.814908992" observedRunningTime="2026-03-19 16:45:27.211094932 +0000 UTC m=+290.357152472" 
watchObservedRunningTime="2026-03-19 16:45:27.211810461 +0000 UTC m=+290.357868001" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.215171 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5w22\" (UniqueName: \"kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22\") pod \"controller-manager-6778d545d-69zk9\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.217929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxcbs\" (UniqueName: \"kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs\") pod \"route-controller-manager-6d564d898c-2kv95\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.228274 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.239032 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.432381 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.494889 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:27 crc kubenswrapper[4792]: W0319 16:45:27.510358 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8227d354_6c6a_4ccb_8a38_c2b3f794421e.slice/crio-2589741f428846bcdccc8eaa66405e33bee88ca48e3054b8cc683003c023a9a1 WatchSource:0}: Error finding container 2589741f428846bcdccc8eaa66405e33bee88ca48e3054b8cc683003c023a9a1: Status 404 returned error can't find the container with id 2589741f428846bcdccc8eaa66405e33bee88ca48e3054b8cc683003c023a9a1 Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.746603 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b44e89a-8408-4aa4-a28a-1c38f802a3f0" path="/var/lib/kubelet/pods/2b44e89a-8408-4aa4-a28a-1c38f802a3f0/volumes" Mar 19 16:45:27 crc kubenswrapper[4792]: I0319 16:45:27.747273 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af642110-a473-4483-8475-60439fced495" path="/var/lib/kubelet/pods/af642110-a473-4483-8475-60439fced495/volumes" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.197009 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerStarted","Data":"3d50d30b3bbe8d0673b373d3f42dafdcd6de9a4f9ca0dc4f1fc6b5f87c70a2dd"} Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.199987 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" event={"ID":"8227d354-6c6a-4ccb-8a38-c2b3f794421e","Type":"ContainerStarted","Data":"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53"} Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.200027 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" event={"ID":"8227d354-6c6a-4ccb-8a38-c2b3f794421e","Type":"ContainerStarted","Data":"2589741f428846bcdccc8eaa66405e33bee88ca48e3054b8cc683003c023a9a1"} Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.200822 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.202910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" event={"ID":"698b2fef-7717-48bc-850e-8e03b673750c","Type":"ContainerStarted","Data":"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f"} Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.202954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" event={"ID":"698b2fef-7717-48bc-850e-8e03b673750c","Type":"ContainerStarted","Data":"b0f26ceb8db00e1af391f7d963694c4a79a45c5a379db5cea0331206ce1708b9"} Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.203655 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.207211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.218331 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.236895 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" podStartSLOduration=3.236877171 podStartE2EDuration="3.236877171s" podCreationTimestamp="2026-03-19 16:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:28.236035729 +0000 UTC m=+291.382093259" watchObservedRunningTime="2026-03-19 16:45:28.236877171 +0000 UTC m=+291.382934701" Mar 19 16:45:28 crc kubenswrapper[4792]: I0319 16:45:28.259063 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" podStartSLOduration=3.259048199 podStartE2EDuration="3.259048199s" podCreationTimestamp="2026-03-19 16:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:28.258770951 +0000 UTC m=+291.404828491" watchObservedRunningTime="2026-03-19 16:45:28.259048199 +0000 UTC m=+291.405105739" Mar 19 16:45:29 crc kubenswrapper[4792]: I0319 16:45:29.212096 4792 generic.go:334] "Generic (PLEG): container finished" podID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerID="3d50d30b3bbe8d0673b373d3f42dafdcd6de9a4f9ca0dc4f1fc6b5f87c70a2dd" exitCode=0 Mar 19 16:45:29 crc kubenswrapper[4792]: I0319 16:45:29.212179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerDied","Data":"3d50d30b3bbe8d0673b373d3f42dafdcd6de9a4f9ca0dc4f1fc6b5f87c70a2dd"} Mar 19 16:45:29 crc kubenswrapper[4792]: I0319 16:45:29.214890 4792 generic.go:334] "Generic (PLEG): container 
finished" podID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerID="d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7" exitCode=0 Mar 19 16:45:29 crc kubenswrapper[4792]: I0319 16:45:29.214942 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerDied","Data":"d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7"} Mar 19 16:45:29 crc kubenswrapper[4792]: E0319 16:45:29.465377 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfa468d_32df_43ac_8884_40aad47fd099.slice\": RecentStats: unable to find data in memory cache]" Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.229381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerStarted","Data":"17ee5a98f9cfdcefe60c3e560ba7c8a48959517fea02779e6883935eaeae2874"} Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.232597 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerStarted","Data":"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9"} Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.235103 4792 generic.go:334] "Generic (PLEG): container finished" podID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerID="3ccba9e077edf505dd17e05da375461cc1ac536223478483b04486b2602f4fc5" exitCode=0 Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.235179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" 
event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerDied","Data":"3ccba9e077edf505dd17e05da375461cc1ac536223478483b04486b2602f4fc5"} Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.248369 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhxns" podStartSLOduration=5.410804576 podStartE2EDuration="54.248340456s" podCreationTimestamp="2026-03-19 16:44:36 +0000 UTC" firstStartedPulling="2026-03-19 16:44:40.819235665 +0000 UTC m=+243.965293195" lastFinishedPulling="2026-03-19 16:45:29.656771535 +0000 UTC m=+292.802829075" observedRunningTime="2026-03-19 16:45:30.245384885 +0000 UTC m=+293.391442425" watchObservedRunningTime="2026-03-19 16:45:30.248340456 +0000 UTC m=+293.394398026" Mar 19 16:45:30 crc kubenswrapper[4792]: I0319 16:45:30.261312 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5pth" podStartSLOduration=5.432093042 podStartE2EDuration="55.26129177s" podCreationTimestamp="2026-03-19 16:44:35 +0000 UTC" firstStartedPulling="2026-03-19 16:44:39.75791165 +0000 UTC m=+242.903969190" lastFinishedPulling="2026-03-19 16:45:29.587110368 +0000 UTC m=+292.733167918" observedRunningTime="2026-03-19 16:45:30.260405356 +0000 UTC m=+293.406462926" watchObservedRunningTime="2026-03-19 16:45:30.26129177 +0000 UTC m=+293.407349320" Mar 19 16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.242778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerStarted","Data":"221dea5054dcc2c2a028bb8abcfe4a44152940c1deff507ac2dc02bbd84ea42c"} Mar 19 16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.244876 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e59df38-8404-4664-96d7-481e34988bee" containerID="6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d" exitCode=0 Mar 19 
16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.244964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerDied","Data":"6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d"} Mar 19 16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.246813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerStarted","Data":"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0"} Mar 19 16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.250103 4792 generic.go:334] "Generic (PLEG): container finished" podID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerID="fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd" exitCode=0 Mar 19 16:45:31 crc kubenswrapper[4792]: I0319 16:45:31.250144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerDied","Data":"fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd"} Mar 19 16:45:32 crc kubenswrapper[4792]: I0319 16:45:32.261054 4792 generic.go:334] "Generic (PLEG): container finished" podID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerID="73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0" exitCode=0 Mar 19 16:45:32 crc kubenswrapper[4792]: I0319 16:45:32.261130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerDied","Data":"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0"} Mar 19 16:45:32 crc kubenswrapper[4792]: I0319 16:45:32.277406 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fd9rl" 
podStartSLOduration=8.360727063 podStartE2EDuration="59.277385791s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="2026-03-19 16:44:39.774233337 +0000 UTC m=+242.920290877" lastFinishedPulling="2026-03-19 16:45:30.690892065 +0000 UTC m=+293.836949605" observedRunningTime="2026-03-19 16:45:32.276077946 +0000 UTC m=+295.422135486" watchObservedRunningTime="2026-03-19 16:45:32.277385791 +0000 UTC m=+295.423443341" Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.294706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerStarted","Data":"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385"} Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.314187 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4272" podStartSLOduration=6.415301588 podStartE2EDuration="58.314171324s" podCreationTimestamp="2026-03-19 16:44:35 +0000 UTC" firstStartedPulling="2026-03-19 16:44:40.810873696 +0000 UTC m=+243.956931236" lastFinishedPulling="2026-03-19 16:45:32.709743432 +0000 UTC m=+295.855800972" observedRunningTime="2026-03-19 16:45:33.314140583 +0000 UTC m=+296.460198123" watchObservedRunningTime="2026-03-19 16:45:33.314171324 +0000 UTC m=+296.460228864" Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.808699 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.808771 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.854061 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:33 crc 
kubenswrapper[4792]: I0319 16:45:33.982434 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:33 crc kubenswrapper[4792]: I0319 16:45:33.982504 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:34 crc kubenswrapper[4792]: I0319 16:45:34.042077 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:34 crc kubenswrapper[4792]: I0319 16:45:34.303157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerStarted","Data":"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff"} Mar 19 16:45:34 crc kubenswrapper[4792]: I0319 16:45:34.330653 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4724c" podStartSLOduration=5.747651504 podStartE2EDuration="1m1.330618839s" podCreationTimestamp="2026-03-19 16:44:33 +0000 UTC" firstStartedPulling="2026-03-19 16:44:37.515149712 +0000 UTC m=+240.661207252" lastFinishedPulling="2026-03-19 16:45:33.098117047 +0000 UTC m=+296.244174587" observedRunningTime="2026-03-19 16:45:34.328139342 +0000 UTC m=+297.474196882" watchObservedRunningTime="2026-03-19 16:45:34.330618839 +0000 UTC m=+297.476676379" Mar 19 16:45:34 crc kubenswrapper[4792]: I0319 16:45:34.366616 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.314647 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" 
event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerStarted","Data":"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558"} Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.338619 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-25ctb" podStartSLOduration=5.678943328 podStartE2EDuration="59.338589002s" podCreationTimestamp="2026-03-19 16:44:36 +0000 UTC" firstStartedPulling="2026-03-19 16:44:40.816061858 +0000 UTC m=+243.962119398" lastFinishedPulling="2026-03-19 16:45:34.475707532 +0000 UTC m=+297.621765072" observedRunningTime="2026-03-19 16:45:35.33302183 +0000 UTC m=+298.479079380" watchObservedRunningTime="2026-03-19 16:45:35.338589002 +0000 UTC m=+298.484646582" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.392902 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" containerID="cri-o://71a19a20836a76c9b37c685b28611110d253bb69c4e3a0abbec524f4187f47ce" gracePeriod=15 Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.586317 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.586381 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.654104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.975917 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:35 crc kubenswrapper[4792]: I0319 16:45:35.976001 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.030905 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.322459 4792 generic.go:334] "Generic (PLEG): container finished" podID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerID="71a19a20836a76c9b37c685b28611110d253bb69c4e3a0abbec524f4187f47ce" exitCode=0 Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.322498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" event={"ID":"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794","Type":"ContainerDied","Data":"71a19a20836a76c9b37c685b28611110d253bb69c4e3a0abbec524f4187f47ce"} Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.368639 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.550324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"] Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.550775 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzjbz" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="registry-server" containerID="cri-o://957009c6a72c10a859d258ee406b298f882aab30ce5b91e8edd8e4e76ba86337" gracePeriod=2 Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.582266 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-25ctb" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.582312 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-25ctb" 
Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.924545 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.991501 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:36 crc kubenswrapper[4792]: I0319 16:45:36.991593 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017653 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017732 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login\") pod 
\"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017811 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017827 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017905 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017939 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle\") pod 
\"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.017974 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.018034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwl5m\" (UniqueName: \"kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.018054 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.018084 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs\") pod \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\" (UID: \"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794\") " Mar 19 16:45:37 crc kubenswrapper[4792]: 
I0319 16:45:37.018802 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.019341 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.019740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.020107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.023361 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m" (OuterVolumeSpecName: "kube-api-access-dwl5m") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "kube-api-access-dwl5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.023620 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.023805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.034129 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.034210 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.043207 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.043463 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.043549 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.043642 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.043987 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" (UID: "967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.051993 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119713 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119745 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119759 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119774 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119787 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119801 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119810 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119819 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119829 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119854 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwl5m\" (UniqueName: \"kubernetes.io/projected/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-kube-api-access-dwl5m\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119864 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119871 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119881 4792 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.119889 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.332168 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerID="957009c6a72c10a859d258ee406b298f882aab30ce5b91e8edd8e4e76ba86337" exitCode=0 Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.332252 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerDied","Data":"957009c6a72c10a859d258ee406b298f882aab30ce5b91e8edd8e4e76ba86337"} Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.334463 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" event={"ID":"967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794","Type":"ContainerDied","Data":"203253d1144db133ca0303855d79494ba1a15b21a9907c8cdda1a9412cdb7795"} Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.334534 4792 scope.go:117] "RemoveContainer" containerID="71a19a20836a76c9b37c685b28611110d253bb69c4e3a0abbec524f4187f47ce" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.334696 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2t8f8" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.369257 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.373613 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2t8f8"] Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.377563 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.622547 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-25ctb" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="registry-server" probeResult="failure" output=< Mar 19 16:45:37 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 16:45:37 crc kubenswrapper[4792]: > Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.701399 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.747937 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" path="/var/lib/kubelet/pods/967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794/volumes" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.829495 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-778j2\" (UniqueName: \"kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2\") pod \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.829641 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities\") pod \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.829693 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content\") pod \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\" (UID: \"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2\") " Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.830731 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities" (OuterVolumeSpecName: "utilities") pod "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" (UID: "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.836625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2" (OuterVolumeSpecName: "kube-api-access-778j2") pod "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" (UID: "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2"). InnerVolumeSpecName "kube-api-access-778j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.911958 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" (UID: "ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.930788 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.930813 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-778j2\" (UniqueName: \"kubernetes.io/projected/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-kube-api-access-778j2\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:37 crc kubenswrapper[4792]: I0319 16:45:37.930826 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.342698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzjbz" 
event={"ID":"ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2","Type":"ContainerDied","Data":"409a2c3aa92cd2df833a54864981bd7ffa6cb3bfd80c2f9b8aa688c1c3611844"} Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.342760 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzjbz" Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.343477 4792 scope.go:117] "RemoveContainer" containerID="957009c6a72c10a859d258ee406b298f882aab30ce5b91e8edd8e4e76ba86337" Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.372179 4792 scope.go:117] "RemoveContainer" containerID="d00d17bbbeaa0191d122e2f8b1544420736dcfbfc63faeed4dd74ac9d1b1b38d" Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.380602 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"] Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.390479 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzjbz"] Mar 19 16:45:38 crc kubenswrapper[4792]: I0319 16:45:38.412992 4792 scope.go:117] "RemoveContainer" containerID="64f45b2d8c3b6184e917689f85df32d0ef01e41aa76e8474cfe55281eb50ea6c" Mar 19 16:45:39 crc kubenswrapper[4792]: I0319 16:45:39.746062 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" path="/var/lib/kubelet/pods/ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2/volumes" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.153217 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"] Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.153431 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhxns" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="registry-server" 
containerID="cri-o://17ee5a98f9cfdcefe60c3e560ba7c8a48959517fea02779e6883935eaeae2874" gracePeriod=2 Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.362604 4792 generic.go:334] "Generic (PLEG): container finished" podID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerID="17ee5a98f9cfdcefe60c3e560ba7c8a48959517fea02779e6883935eaeae2874" exitCode=0 Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.362658 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerDied","Data":"17ee5a98f9cfdcefe60c3e560ba7c8a48959517fea02779e6883935eaeae2874"} Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.644977 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.780401 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities\") pod \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.780496 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content\") pod \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.780587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvhq\" (UniqueName: \"kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq\") pod \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\" (UID: \"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb\") " Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 
16:45:41.781915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities" (OuterVolumeSpecName: "utilities") pod "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" (UID: "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.787710 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq" (OuterVolumeSpecName: "kube-api-access-dqvhq") pod "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" (UID: "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb"). InnerVolumeSpecName "kube-api-access-dqvhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.881778 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqvhq\" (UniqueName: \"kubernetes.io/projected/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-kube-api-access-dqvhq\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.881813 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.944833 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" (UID: "e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:41 crc kubenswrapper[4792]: I0319 16:45:41.982971 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.371940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhxns" event={"ID":"e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb","Type":"ContainerDied","Data":"6602a973e131b13b73f020836b0afc0cd93c3da60a393eca3db4de2ca726f55a"} Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.372386 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhxns" Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.372542 4792 scope.go:117] "RemoveContainer" containerID="17ee5a98f9cfdcefe60c3e560ba7c8a48959517fea02779e6883935eaeae2874" Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.412016 4792 scope.go:117] "RemoveContainer" containerID="3d50d30b3bbe8d0673b373d3f42dafdcd6de9a4f9ca0dc4f1fc6b5f87c70a2dd" Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.416361 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"] Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.421817 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhxns"] Mar 19 16:45:42 crc kubenswrapper[4792]: I0319 16:45:42.437092 4792 scope.go:117] "RemoveContainer" containerID="42e34f7d42bedf62dc4e009080f8131952f4defcdd16073100bcf082dee371dd" Mar 19 16:45:43 crc kubenswrapper[4792]: I0319 16:45:43.419245 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:45:43 crc kubenswrapper[4792]: I0319 16:45:43.419296 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:45:43 crc kubenswrapper[4792]: I0319 16:45:43.462296 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:45:43 crc kubenswrapper[4792]: I0319 16:45:43.747346 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" path="/var/lib/kubelet/pods/e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb/volumes" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.018125 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.428099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.923875 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-stv4d"] Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924103 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="extract-utilities" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924117 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="extract-utilities" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924129 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="extract-content" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924137 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="extract-content" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924145 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="extract-utilities" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924153 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="extract-utilities" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924164 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924170 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924183 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="extract-content" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924190 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="extract-content" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924204 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924214 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: E0319 16:45:44.924230 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924239 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924354 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="967fbdbc-6ad0-4bd5-ad22-6caa8ae6a794" containerName="oauth-openshift" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924369 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47e5f8d-1cf3-4ae5-bc04-e93c8d612aeb" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924383 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca871067-aaf0-4f1a-bc9e-29dabe8f1bb2" containerName="registry-server" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.924780 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.926517 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.927128 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.927372 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.927732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.927750 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.927952 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.928242 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.928834 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.928961 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.929402 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.929582 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.930228 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.935615 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.939118 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.942697 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 16:45:44 crc kubenswrapper[4792]: I0319 16:45:44.976557 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-stv4d"] Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzmv\" (UniqueName: 
\"kubernetes.io/projected/14d78136-a62d-4252-adf4-f9830e9fe8c1-kube-api-access-wmzmv\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-dir\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-policies\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " 
pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026773 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026875 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.026989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.027100 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.027159 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.027197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.027234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.027257 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128095 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128116 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-stv4d\" 
(UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128142 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128187 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzmv\" (UniqueName: \"kubernetes.io/projected/14d78136-a62d-4252-adf4-f9830e9fe8c1-kube-api-access-wmzmv\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128249 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-dir\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128276 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-policies\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128356 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " 
pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-dir\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.128382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.129075 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.129087 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.129117 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.129270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14d78136-a62d-4252-adf4-f9830e9fe8c1-audit-policies\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.133632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.133708 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.135020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc 
kubenswrapper[4792]: I0319 16:45:45.135126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.135191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.135194 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.135450 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.140391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/14d78136-a62d-4252-adf4-f9830e9fe8c1-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.144456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzmv\" (UniqueName: \"kubernetes.io/projected/14d78136-a62d-4252-adf4-f9830e9fe8c1-kube-api-access-wmzmv\") pod \"oauth-openshift-65556786d7-stv4d\" (UID: \"14d78136-a62d-4252-adf4-f9830e9fe8c1\") " pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.240402 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.428093 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.428612 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" podUID="698b2fef-7717-48bc-850e-8e03b673750c" containerName="controller-manager" containerID="cri-o://1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f" gracePeriod=30 Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.474154 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.474504 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" podUID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" containerName="route-controller-manager" 
containerID="cri-o://dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53" gracePeriod=30 Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.654115 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-stv4d"] Mar 19 16:45:45 crc kubenswrapper[4792]: W0319 16:45:45.664085 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14d78136_a62d_4252_adf4_f9830e9fe8c1.slice/crio-1345830302bdee5f31f982d19c4acdbcf61420d8614cbdfde6dd4c3bd286fb90 WatchSource:0}: Error finding container 1345830302bdee5f31f982d19c4acdbcf61420d8614cbdfde6dd4c3bd286fb90: Status 404 returned error can't find the container with id 1345830302bdee5f31f982d19c4acdbcf61420d8614cbdfde6dd4c3bd286fb90 Mar 19 16:45:45 crc kubenswrapper[4792]: I0319 16:45:45.927469 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.009413 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.014576 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.039458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca\") pod \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.039527 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config\") pod \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.039619 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert\") pod \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.039646 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxcbs\" (UniqueName: \"kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs\") pod \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\" (UID: \"8227d354-6c6a-4ccb-8a38-c2b3f794421e\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.044242 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca" (OuterVolumeSpecName: "client-ca") pod "8227d354-6c6a-4ccb-8a38-c2b3f794421e" (UID: 
"8227d354-6c6a-4ccb-8a38-c2b3f794421e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.044346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config" (OuterVolumeSpecName: "config") pod "8227d354-6c6a-4ccb-8a38-c2b3f794421e" (UID: "8227d354-6c6a-4ccb-8a38-c2b3f794421e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.048008 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8227d354-6c6a-4ccb-8a38-c2b3f794421e" (UID: "8227d354-6c6a-4ccb-8a38-c2b3f794421e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.048204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs" (OuterVolumeSpecName: "kube-api-access-lxcbs") pod "8227d354-6c6a-4ccb-8a38-c2b3f794421e" (UID: "8227d354-6c6a-4ccb-8a38-c2b3f794421e"). InnerVolumeSpecName "kube-api-access-lxcbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.143708 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert\") pod \"698b2fef-7717-48bc-850e-8e03b673750c\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.143809 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config\") pod \"698b2fef-7717-48bc-850e-8e03b673750c\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.143923 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5w22\" (UniqueName: \"kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22\") pod \"698b2fef-7717-48bc-850e-8e03b673750c\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.143942 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca\") pod \"698b2fef-7717-48bc-850e-8e03b673750c\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.143956 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles\") pod \"698b2fef-7717-48bc-850e-8e03b673750c\" (UID: \"698b2fef-7717-48bc-850e-8e03b673750c\") " Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.144597 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca" (OuterVolumeSpecName: "client-ca") pod "698b2fef-7717-48bc-850e-8e03b673750c" (UID: "698b2fef-7717-48bc-850e-8e03b673750c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.144814 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "698b2fef-7717-48bc-850e-8e03b673750c" (UID: "698b2fef-7717-48bc-850e-8e03b673750c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.144929 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config" (OuterVolumeSpecName: "config") pod "698b2fef-7717-48bc-850e-8e03b673750c" (UID: "698b2fef-7717-48bc-850e-8e03b673750c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145048 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145064 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145072 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d354-6c6a-4ccb-8a38-c2b3f794421e-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145079 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145087 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/698b2fef-7717-48bc-850e-8e03b673750c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145118 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d354-6c6a-4ccb-8a38-c2b3f794421e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.145127 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxcbs\" (UniqueName: \"kubernetes.io/projected/8227d354-6c6a-4ccb-8a38-c2b3f794421e-kube-api-access-lxcbs\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.146623 4792 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "698b2fef-7717-48bc-850e-8e03b673750c" (UID: "698b2fef-7717-48bc-850e-8e03b673750c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.147710 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22" (OuterVolumeSpecName: "kube-api-access-b5w22") pod "698b2fef-7717-48bc-850e-8e03b673750c" (UID: "698b2fef-7717-48bc-850e-8e03b673750c"). InnerVolumeSpecName "kube-api-access-b5w22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.246569 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5w22\" (UniqueName: \"kubernetes.io/projected/698b2fef-7717-48bc-850e-8e03b673750c-kube-api-access-b5w22\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.246604 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/698b2fef-7717-48bc-850e-8e03b673750c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.413501 4792 generic.go:334] "Generic (PLEG): container finished" podID="698b2fef-7717-48bc-850e-8e03b673750c" containerID="1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f" exitCode=0 Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.413584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" event={"ID":"698b2fef-7717-48bc-850e-8e03b673750c","Type":"ContainerDied","Data":"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.413618 
4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" event={"ID":"698b2fef-7717-48bc-850e-8e03b673750c","Type":"ContainerDied","Data":"b0f26ceb8db00e1af391f7d963694c4a79a45c5a379db5cea0331206ce1708b9"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.413639 4792 scope.go:117] "RemoveContainer" containerID="1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.414231 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6778d545d-69zk9" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.415674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" event={"ID":"14d78136-a62d-4252-adf4-f9830e9fe8c1","Type":"ContainerStarted","Data":"57686d57dced50c52b9d5d3436c0604f76a4afec7189e37e0899020bfe2dc486"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.415728 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" event={"ID":"14d78136-a62d-4252-adf4-f9830e9fe8c1","Type":"ContainerStarted","Data":"1345830302bdee5f31f982d19c4acdbcf61420d8614cbdfde6dd4c3bd286fb90"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.416353 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.429022 4792 generic.go:334] "Generic (PLEG): container finished" podID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" containerID="dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53" exitCode=0 Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.429105 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" 
event={"ID":"8227d354-6c6a-4ccb-8a38-c2b3f794421e","Type":"ContainerDied","Data":"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.429110 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.429152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95" event={"ID":"8227d354-6c6a-4ccb-8a38-c2b3f794421e","Type":"ContainerDied","Data":"2589741f428846bcdccc8eaa66405e33bee88ca48e3054b8cc683003c023a9a1"} Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.442884 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podStartSLOduration=36.442856222 podStartE2EDuration="36.442856222s" podCreationTimestamp="2026-03-19 16:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:46.436426196 +0000 UTC m=+309.582483736" watchObservedRunningTime="2026-03-19 16:45:46.442856222 +0000 UTC m=+309.588913762" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.444907 4792 scope.go:117] "RemoveContainer" containerID="1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f" Mar 19 16:45:46 crc kubenswrapper[4792]: E0319 16:45:46.446611 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f\": container with ID starting with 1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f not found: ID does not exist" containerID="1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f" Mar 19 16:45:46 crc 
kubenswrapper[4792]: I0319 16:45:46.446713 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f"} err="failed to get container status \"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f\": rpc error: code = NotFound desc = could not find container \"1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f\": container with ID starting with 1a45d8cc94c083e1c6118917bdb8c358fda6f22b39f11a3daea505f382e6907f not found: ID does not exist" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.446814 4792 scope.go:117] "RemoveContainer" containerID="dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.460643 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.464919 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6778d545d-69zk9"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.471577 4792 scope.go:117] "RemoveContainer" containerID="dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53" Mar 19 16:45:46 crc kubenswrapper[4792]: E0319 16:45:46.472434 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53\": container with ID starting with dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53 not found: ID does not exist" containerID="dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.472466 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53"} err="failed to get container status \"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53\": rpc error: code = NotFound desc = could not find container \"dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53\": container with ID starting with dace6c3955020a5ca36293f84a6199b7fab247e9b2910c51def10100fb8e5b53 not found: ID does not exist" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.493438 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.497936 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d564d898c-2kv95"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.564607 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.634570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-25ctb" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.675283 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-25ctb" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.925703 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5"] Mar 19 16:45:46 crc kubenswrapper[4792]: E0319 16:45:46.926028 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" containerName="route-controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.926045 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" containerName="route-controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: E0319 16:45:46.926062 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698b2fef-7717-48bc-850e-8e03b673750c" containerName="controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.926073 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="698b2fef-7717-48bc-850e-8e03b673750c" containerName="controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.926204 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" containerName="route-controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.926217 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="698b2fef-7717-48bc-850e-8e03b673750c" containerName="controller-manager" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.926638 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.929177 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.931430 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.931750 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.931980 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.932022 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.932165 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.935228 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9c7bf785c-5ptd8"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.936058 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.938750 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.939036 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c7bf785c-5ptd8"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.939396 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.940903 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.941171 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.942367 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.942883 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.944011 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5"] Mar 19 16:45:46 crc kubenswrapper[4792]: I0319 16:45:46.948612 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.058332 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-client-ca\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-proxy-ca-bundles\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059268 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7chr\" (UniqueName: \"kubernetes.io/projected/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-kube-api-access-b7chr\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059348 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-serving-cert\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359345fa-dd3f-4812-9760-7eb10d601634-serving-cert\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " 
pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ct2\" (UniqueName: \"kubernetes.io/projected/359345fa-dd3f-4812-9760-7eb10d601634-kube-api-access-62ct2\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-config\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-config\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.059648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-client-ca\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.150724 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-fd9rl"] Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.150977 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fd9rl" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="registry-server" containerID="cri-o://221dea5054dcc2c2a028bb8abcfe4a44152940c1deff507ac2dc02bbd84ea42c" gracePeriod=2 Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.160910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359345fa-dd3f-4812-9760-7eb10d601634-serving-cert\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.160951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62ct2\" (UniqueName: \"kubernetes.io/projected/359345fa-dd3f-4812-9760-7eb10d601634-kube-api-access-62ct2\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.160975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-config\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161000 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-config\") pod 
\"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161018 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-client-ca\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-client-ca\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-proxy-ca-bundles\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161091 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7chr\" (UniqueName: \"kubernetes.io/projected/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-kube-api-access-b7chr\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.161110 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-serving-cert\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.162946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-client-ca\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.163395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-client-ca\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.163610 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-config\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.164340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359345fa-dd3f-4812-9760-7eb10d601634-config\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.164633 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-proxy-ca-bundles\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.165652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359345fa-dd3f-4812-9760-7eb10d601634-serving-cert\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.167342 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-serving-cert\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.177761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62ct2\" (UniqueName: \"kubernetes.io/projected/359345fa-dd3f-4812-9760-7eb10d601634-kube-api-access-62ct2\") pod \"route-controller-manager-65478b57cc-lltk5\" (UID: \"359345fa-dd3f-4812-9760-7eb10d601634\") " pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.180221 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7chr\" (UniqueName: \"kubernetes.io/projected/5575e5d6-2fee-4709-8eb9-7b3bff5c7563-kube-api-access-b7chr\") pod \"controller-manager-9c7bf785c-5ptd8\" (UID: \"5575e5d6-2fee-4709-8eb9-7b3bff5c7563\") " 
pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.279925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.285374 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.451459 4792 generic.go:334] "Generic (PLEG): container finished" podID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerID="221dea5054dcc2c2a028bb8abcfe4a44152940c1deff507ac2dc02bbd84ea42c" exitCode=0 Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.451523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerDied","Data":"221dea5054dcc2c2a028bb8abcfe4a44152940c1deff507ac2dc02bbd84ea42c"} Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.541637 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.668469 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities\") pod \"39daf6b3-68ce-429a-b454-1a07c6706a9e\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.668574 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content\") pod \"39daf6b3-68ce-429a-b454-1a07c6706a9e\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.669684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities" (OuterVolumeSpecName: "utilities") pod "39daf6b3-68ce-429a-b454-1a07c6706a9e" (UID: "39daf6b3-68ce-429a-b454-1a07c6706a9e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.669963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj452\" (UniqueName: \"kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452\") pod \"39daf6b3-68ce-429a-b454-1a07c6706a9e\" (UID: \"39daf6b3-68ce-429a-b454-1a07c6706a9e\") " Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.670392 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.674121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452" (OuterVolumeSpecName: "kube-api-access-kj452") pod "39daf6b3-68ce-429a-b454-1a07c6706a9e" (UID: "39daf6b3-68ce-429a-b454-1a07c6706a9e"). InnerVolumeSpecName "kube-api-access-kj452". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.717875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5"] Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.721731 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c7bf785c-5ptd8"] Mar 19 16:45:47 crc kubenswrapper[4792]: W0319 16:45:47.726033 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359345fa_dd3f_4812_9760_7eb10d601634.slice/crio-c050b7f9609f288a511b7680b717a1ae80960401b6983d428db16ad302c88f39 WatchSource:0}: Error finding container c050b7f9609f288a511b7680b717a1ae80960401b6983d428db16ad302c88f39: Status 404 returned error can't find the container with id c050b7f9609f288a511b7680b717a1ae80960401b6983d428db16ad302c88f39 Mar 19 16:45:47 crc kubenswrapper[4792]: W0319 16:45:47.726226 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5575e5d6_2fee_4709_8eb9_7b3bff5c7563.slice/crio-7e8bcad63fb71b892aad3da4158874010ca4fe76b6a578b1d914cd369ef5681f WatchSource:0}: Error finding container 7e8bcad63fb71b892aad3da4158874010ca4fe76b6a578b1d914cd369ef5681f: Status 404 returned error can't find the container with id 7e8bcad63fb71b892aad3da4158874010ca4fe76b6a578b1d914cd369ef5681f Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.729472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39daf6b3-68ce-429a-b454-1a07c6706a9e" (UID: "39daf6b3-68ce-429a-b454-1a07c6706a9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.750382 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="698b2fef-7717-48bc-850e-8e03b673750c" path="/var/lib/kubelet/pods/698b2fef-7717-48bc-850e-8e03b673750c/volumes" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.751360 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8227d354-6c6a-4ccb-8a38-c2b3f794421e" path="/var/lib/kubelet/pods/8227d354-6c6a-4ccb-8a38-c2b3f794421e/volumes" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.771905 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39daf6b3-68ce-429a-b454-1a07c6706a9e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:47 crc kubenswrapper[4792]: I0319 16:45:47.771934 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj452\" (UniqueName: \"kubernetes.io/projected/39daf6b3-68ce-429a-b454-1a07c6706a9e-kube-api-access-kj452\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.467126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd9rl" event={"ID":"39daf6b3-68ce-429a-b454-1a07c6706a9e","Type":"ContainerDied","Data":"04285f2e5a1d73a0d0ed0606044d2a1bf02b469bbc747a90543ba80ccf758a3c"} Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.467175 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd9rl" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.467220 4792 scope.go:117] "RemoveContainer" containerID="221dea5054dcc2c2a028bb8abcfe4a44152940c1deff507ac2dc02bbd84ea42c" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.469258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" event={"ID":"5575e5d6-2fee-4709-8eb9-7b3bff5c7563","Type":"ContainerStarted","Data":"7416e76ed8d24a2ee7ad89edf653f9bc75af88b082e61ccc27b8a92bf90841a6"} Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.469553 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.469570 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" event={"ID":"5575e5d6-2fee-4709-8eb9-7b3bff5c7563","Type":"ContainerStarted","Data":"7e8bcad63fb71b892aad3da4158874010ca4fe76b6a578b1d914cd369ef5681f"} Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.472154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" event={"ID":"359345fa-dd3f-4812-9760-7eb10d601634","Type":"ContainerStarted","Data":"46a9f60d6a4266af70cd825c3c38a2ff12759f05154eebe0e4d2afc81f0ead8c"} Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.472271 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" event={"ID":"359345fa-dd3f-4812-9760-7eb10d601634","Type":"ContainerStarted","Data":"c050b7f9609f288a511b7680b717a1ae80960401b6983d428db16ad302c88f39"} Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.472502 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.474833 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.477096 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.482040 4792 scope.go:117] "RemoveContainer" containerID="3ccba9e077edf505dd17e05da375461cc1ac536223478483b04486b2602f4fc5" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.495929 4792 scope.go:117] "RemoveContainer" containerID="22fff4b51c89d56ee537adfe1c80d0b26f6656e28e241a522e908e0eeb940037" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.510708 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podStartSLOduration=3.51068857 podStartE2EDuration="3.51068857s" podCreationTimestamp="2026-03-19 16:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:48.495542165 +0000 UTC m=+311.641599705" watchObservedRunningTime="2026-03-19 16:45:48.51068857 +0000 UTC m=+311.656746110" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.511044 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podStartSLOduration=3.511038239 podStartE2EDuration="3.511038239s" podCreationTimestamp="2026-03-19 16:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:45:48.507231635 +0000 UTC m=+311.653289175" 
watchObservedRunningTime="2026-03-19 16:45:48.511038239 +0000 UTC m=+311.657095779" Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.521720 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd9rl"] Mar 19 16:45:48 crc kubenswrapper[4792]: I0319 16:45:48.525051 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fd9rl"] Mar 19 16:45:49 crc kubenswrapper[4792]: I0319 16:45:49.551322 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"] Mar 19 16:45:49 crc kubenswrapper[4792]: I0319 16:45:49.551596 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4272" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="registry-server" containerID="cri-o://2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385" gracePeriod=2 Mar 19 16:45:49 crc kubenswrapper[4792]: I0319 16:45:49.750430 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" path="/var/lib/kubelet/pods/39daf6b3-68ce-429a-b454-1a07c6706a9e/volumes" Mar 19 16:45:49 crc kubenswrapper[4792]: I0319 16:45:49.934152 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.103564 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content\") pod \"2e59df38-8404-4664-96d7-481e34988bee\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.103612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7j6\" (UniqueName: \"kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6\") pod \"2e59df38-8404-4664-96d7-481e34988bee\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.103675 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities\") pod \"2e59df38-8404-4664-96d7-481e34988bee\" (UID: \"2e59df38-8404-4664-96d7-481e34988bee\") " Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.104558 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities" (OuterVolumeSpecName: "utilities") pod "2e59df38-8404-4664-96d7-481e34988bee" (UID: "2e59df38-8404-4664-96d7-481e34988bee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.111255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6" (OuterVolumeSpecName: "kube-api-access-xh7j6") pod "2e59df38-8404-4664-96d7-481e34988bee" (UID: "2e59df38-8404-4664-96d7-481e34988bee"). InnerVolumeSpecName "kube-api-access-xh7j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.130580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e59df38-8404-4664-96d7-481e34988bee" (UID: "2e59df38-8404-4664-96d7-481e34988bee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.204751 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.204782 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e59df38-8404-4664-96d7-481e34988bee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.204792 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7j6\" (UniqueName: \"kubernetes.io/projected/2e59df38-8404-4664-96d7-481e34988bee-kube-api-access-xh7j6\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.484901 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e59df38-8404-4664-96d7-481e34988bee" containerID="2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385" exitCode=0 Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.485023 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4272" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.484970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerDied","Data":"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385"} Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.485086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4272" event={"ID":"2e59df38-8404-4664-96d7-481e34988bee","Type":"ContainerDied","Data":"0d1944fcb5fdfcc947a04eae473eea98adc1708cbb74da6773244bfdb9f25c0b"} Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.485123 4792 scope.go:117] "RemoveContainer" containerID="2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.508964 4792 scope.go:117] "RemoveContainer" containerID="6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.509526 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"] Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.519768 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4272"] Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.534278 4792 scope.go:117] "RemoveContainer" containerID="42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.549060 4792 scope.go:117] "RemoveContainer" containerID="2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385" Mar 19 16:45:50 crc kubenswrapper[4792]: E0319 16:45:50.549416 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385\": container with ID starting with 2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385 not found: ID does not exist" containerID="2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.549459 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385"} err="failed to get container status \"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385\": rpc error: code = NotFound desc = could not find container \"2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385\": container with ID starting with 2817338daed219e934ceaad810d61c171972c6ddf8b3ce84ef2a55ce9e157385 not found: ID does not exist" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.549481 4792 scope.go:117] "RemoveContainer" containerID="6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d" Mar 19 16:45:50 crc kubenswrapper[4792]: E0319 16:45:50.549887 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d\": container with ID starting with 6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d not found: ID does not exist" containerID="6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.549928 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d"} err="failed to get container status \"6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d\": rpc error: code = NotFound desc = could not find container \"6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d\": container with ID 
starting with 6da4082310845fc0c7d91dc08ba3c2ae486c0737b04a33b486cdb597f475433d not found: ID does not exist" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.549959 4792 scope.go:117] "RemoveContainer" containerID="42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b" Mar 19 16:45:50 crc kubenswrapper[4792]: E0319 16:45:50.550237 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b\": container with ID starting with 42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b not found: ID does not exist" containerID="42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b" Mar 19 16:45:50 crc kubenswrapper[4792]: I0319 16:45:50.550262 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b"} err="failed to get container status \"42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b\": rpc error: code = NotFound desc = could not find container \"42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b\": container with ID starting with 42505a4947d1ae75bd482c736672d080dadfd6f032a87e7f91509cdf7bf7fb0b not found: ID does not exist" Mar 19 16:45:51 crc kubenswrapper[4792]: I0319 16:45:51.746346 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e59df38-8404-4664-96d7-481e34988bee" path="/var/lib/kubelet/pods/2e59df38-8404-4664-96d7-481e34988bee/volumes" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.834668 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835521 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="extract-content" Mar 19 16:45:53 
crc kubenswrapper[4792]: I0319 16:45:53.835542 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="extract-content" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835565 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="extract-content" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835573 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="extract-content" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835590 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="extract-utilities" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835598 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="extract-utilities" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835612 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="registry-server" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835634 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="registry-server" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835643 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="extract-utilities" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835651 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="extract-utilities" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.835662 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="registry-server" Mar 19 16:45:53 crc 
kubenswrapper[4792]: I0319 16:45:53.835672 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="registry-server" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835812 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e59df38-8404-4664-96d7-481e34988bee" containerName="registry-server" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.835830 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="39daf6b3-68ce-429a-b454-1a07c6706a9e" containerName="registry-server" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.836515 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.836722 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837017 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5" gracePeriod=15 Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837014 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3" gracePeriod=15 Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837055 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52" gracePeriod=15 Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837043 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a" gracePeriod=15 Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837448 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837700 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837717 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837730 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837737 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837751 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837758 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837771 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837781 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837790 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837797 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837805 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837812 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837825 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837832 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.837858 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837866 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc 
kubenswrapper[4792]: I0319 16:45:53.837844 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2" gracePeriod=15 Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837970 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837981 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837989 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.837995 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838023 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838032 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838043 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.838142 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838149 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: E0319 16:45:53.838161 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838167 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838261 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.838431 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.881548 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961456 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961535 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961642 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:53 crc kubenswrapper[4792]: I0319 16:45:53.961665 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.062766 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.062825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.062893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.062956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.062978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063043 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063104 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063122 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 
16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 
16:45:54.063193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.063310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.173813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:45:54 crc kubenswrapper[4792]: W0319 16:45:54.193171 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-cd3ac170c201e814b80fb642d36bdd5ce7bb5862e983237a5c27748a9bcda519 WatchSource:0}: Error finding container cd3ac170c201e814b80fb642d36bdd5ce7bb5862e983237a5c27748a9bcda519: Status 404 returned error can't find the container with id cd3ac170c201e814b80fb642d36bdd5ce7bb5862e983237a5c27748a9bcda519 Mar 19 16:45:54 crc kubenswrapper[4792]: E0319 16:45:54.196425 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e4be5f1244091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,LastTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.506645 4792 generic.go:334] "Generic (PLEG): container finished" podID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" containerID="3d889e5ac0d4dcf5b8311946f2dee74ffd9b00280b313a85f79e448519d3fbcf" exitCode=0 Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.506756 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdf5c25-1486-4e52-ba7a-381ceb4d6521","Type":"ContainerDied","Data":"3d889e5ac0d4dcf5b8311946f2dee74ffd9b00280b313a85f79e448519d3fbcf"} Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.507650 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.507969 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.508343 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.509895 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.511656 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.512412 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3" exitCode=0 Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.512434 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52" exitCode=0 Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.512443 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a" exitCode=0 Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.512452 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5" exitCode=2 Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.512539 4792 scope.go:117] "RemoveContainer" containerID="57f4906fc76d52cc6c4d0d47f5589d4ad6d86313ae194e912e1e6c426a42378a" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.513894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3"} Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.513925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cd3ac170c201e814b80fb642d36bdd5ce7bb5862e983237a5c27748a9bcda519"} Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.514439 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.514899 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:54 crc kubenswrapper[4792]: I0319 16:45:54.515272 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:55 crc kubenswrapper[4792]: I0319 16:45:55.525382 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 19 16:45:55 crc kubenswrapper[4792]: I0319 16:45:55.988332 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:55 crc kubenswrapper[4792]: I0319 16:45:55.989322 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:55 crc kubenswrapper[4792]: I0319 16:45:55.990561 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.089295 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access\") pod \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.089384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir\") pod \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.089445 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock\") pod \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\" (UID: \"dcdf5c25-1486-4e52-ba7a-381ceb4d6521\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.089678 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcdf5c25-1486-4e52-ba7a-381ceb4d6521" (UID: "dcdf5c25-1486-4e52-ba7a-381ceb4d6521"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.089749 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock" (OuterVolumeSpecName: "var-lock") pod "dcdf5c25-1486-4e52-ba7a-381ceb4d6521" (UID: "dcdf5c25-1486-4e52-ba7a-381ceb4d6521"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.094872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcdf5c25-1486-4e52-ba7a-381ceb4d6521" (UID: "dcdf5c25-1486-4e52-ba7a-381ceb4d6521"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.191686 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.191752 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.191776 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdf5c25-1486-4e52-ba7a-381ceb4d6521-var-lock\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.241528 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.242248 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.243048 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.243452 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.243704 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394282 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394380 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394455 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394514 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394669 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.394982 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.395029 4792 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.395049 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.535086 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.535701 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2" exitCode=0 Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.535773 4792 scope.go:117] "RemoveContainer" containerID="7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.535809 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.537112 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdf5c25-1486-4e52-ba7a-381ceb4d6521","Type":"ContainerDied","Data":"0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3"} Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.537153 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb4fdc8ee6864c891d58a9f3a8acabb023493310560767bf68000aac630f2b3" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.537173 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.553919 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.554454 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.555088 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 
16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.557096 4792 scope.go:117] "RemoveContainer" containerID="5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.563508 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.564053 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.564392 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.573941 4792 scope.go:117] "RemoveContainer" containerID="a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.591611 4792 scope.go:117] "RemoveContainer" containerID="366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.620712 4792 scope.go:117] "RemoveContainer" containerID="3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.640832 4792 scope.go:117] "RemoveContainer" 
containerID="57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.660595 4792 scope.go:117] "RemoveContainer" containerID="7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.661172 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\": container with ID starting with 7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3 not found: ID does not exist" containerID="7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.661237 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3"} err="failed to get container status \"7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\": rpc error: code = NotFound desc = could not find container \"7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3\": container with ID starting with 7eb811f9b1e689c7970ad9f9b7c34e42a9ac89acc912b86c60a0b9dd82ad53b3 not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.661284 4792 scope.go:117] "RemoveContainer" containerID="5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.661781 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\": container with ID starting with 5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52 not found: ID does not exist" containerID="5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52" Mar 19 16:45:56 crc 
kubenswrapper[4792]: I0319 16:45:56.661820 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52"} err="failed to get container status \"5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\": rpc error: code = NotFound desc = could not find container \"5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52\": container with ID starting with 5014c389f424ac6e18238eef8cda2ace574e85a2430dba7d984934a5d0478b52 not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.661864 4792 scope.go:117] "RemoveContainer" containerID="a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.662393 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\": container with ID starting with a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a not found: ID does not exist" containerID="a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.662437 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a"} err="failed to get container status \"a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\": rpc error: code = NotFound desc = could not find container \"a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a\": container with ID starting with a936591610639fbaa9eed49af4d1195ab9b38da6eb7486232b1914ad9594d31a not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.662467 4792 scope.go:117] "RemoveContainer" containerID="366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5" Mar 19 
16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.662934 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\": container with ID starting with 366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5 not found: ID does not exist" containerID="366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.662986 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5"} err="failed to get container status \"366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\": rpc error: code = NotFound desc = could not find container \"366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5\": container with ID starting with 366da429e3ee9b127861be1a3e7c4550a49adbd0bd018fc7b9c6229d48b3a9f5 not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.663018 4792 scope.go:117] "RemoveContainer" containerID="3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.663373 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\": container with ID starting with 3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2 not found: ID does not exist" containerID="3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.663398 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2"} err="failed to get container status 
\"3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\": rpc error: code = NotFound desc = could not find container \"3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2\": container with ID starting with 3ce6b481c041bd53409946f9724c257aafe9431f819daea241ac26d2aaaf70c2 not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.663416 4792 scope.go:117] "RemoveContainer" containerID="57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.663781 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\": container with ID starting with 57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c not found: ID does not exist" containerID="57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c" Mar 19 16:45:56 crc kubenswrapper[4792]: I0319 16:45:56.663806 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c"} err="failed to get container status \"57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\": rpc error: code = NotFound desc = could not find container \"57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c\": container with ID starting with 57baf033cc2dd0f4d4984d320706586602debb63c42f276c284d956ca8a27f1c not found: ID does not exist" Mar 19 16:45:56 crc kubenswrapper[4792]: E0319 16:45:56.702272 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e4be5f1244091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,LastTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:45:57 crc kubenswrapper[4792]: I0319 16:45:57.743210 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:57 crc kubenswrapper[4792]: I0319 16:45:57.744545 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:57 crc kubenswrapper[4792]: I0319 16:45:57.745282 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:57 crc 
kubenswrapper[4792]: I0319 16:45:57.748145 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.494168 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.494980 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.495432 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.495932 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.496253 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:45:59 crc kubenswrapper[4792]: I0319 16:45:59.496341 4792 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.496630 4792 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Mar 19 16:45:59 crc kubenswrapper[4792]: E0319 16:45:59.698268 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Mar 19 16:46:00 crc kubenswrapper[4792]: E0319 16:46:00.099650 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Mar 19 16:46:00 crc kubenswrapper[4792]: E0319 16:46:00.901884 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Mar 19 16:46:02 crc kubenswrapper[4792]: E0319 16:46:02.502512 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Mar 19 16:46:05 crc kubenswrapper[4792]: E0319 16:46:05.704985 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="6.4s" Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.605976 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.607817 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.607952 4792 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e" exitCode=1 Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.608042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e"} Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.609311 4792 scope.go:117] "RemoveContainer" containerID="8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e" Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.609500 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:06 crc kubenswrapper[4792]: I0319 16:46:06.609873 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:06 crc 
kubenswrapper[4792]: I0319 16:46:06.610295 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:06 crc kubenswrapper[4792]: E0319 16:46:06.704404 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e4be5f1244091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,LastTimestamp:2026-03-19 16:45:54.195832977 +0000 UTC m=+317.341890517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.620757 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.621680 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.621735 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f83288cbf8ac45a3a8fe54cadc0e821234a7bbb9411a47fabdad0cc9d54c1332"} Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.623905 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.624893 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.625502 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.744473 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.745021 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: I0319 16:46:07.745469 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:07 crc kubenswrapper[4792]: E0319 16:46:07.761551 4792 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" volumeName="registry-storage" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.739424 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.740687 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.741444 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.742204 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.752200 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.752243 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:08 crc kubenswrapper[4792]: E0319 16:46:08.752672 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:08 crc kubenswrapper[4792]: I0319 16:46:08.753351 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:08 crc kubenswrapper[4792]: W0319 16:46:08.782747 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7aa51342ba406b381d70642e0ebf44573530768ae29b59a60e9180fd27fad0cc WatchSource:0}: Error finding container 7aa51342ba406b381d70642e0ebf44573530768ae29b59a60e9180fd27fad0cc: Status 404 returned error can't find the container with id 7aa51342ba406b381d70642e0ebf44573530768ae29b59a60e9180fd27fad0cc Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.639509 4792 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9521f901df4c77e6e544077919eef626416e096a508b6cae50d226564f03809f" exitCode=0 Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.639645 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9521f901df4c77e6e544077919eef626416e096a508b6cae50d226564f03809f"} Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.639986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7aa51342ba406b381d70642e0ebf44573530768ae29b59a60e9180fd27fad0cc"} Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.640444 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.640497 4792 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:09 crc kubenswrapper[4792]: E0319 16:46:09.641216 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.641222 4792 status_manager.go:851] "Failed to get status for pod" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.641709 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:09 crc kubenswrapper[4792]: I0319 16:46:09.642209 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Mar 19 16:46:10 crc kubenswrapper[4792]: I0319 16:46:10.651654 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"564e2c2075ed68b9d55a164eb14793c3961be1e19fd6f5ab9ea70cd6ac253686"} Mar 19 16:46:10 
crc kubenswrapper[4792]: I0319 16:46:10.651926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2a53f1f2e85fba31e526f24d735def3d3938e89b05e91fe7ebb58b0d645c4ff"} Mar 19 16:46:10 crc kubenswrapper[4792]: I0319 16:46:10.651935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9658abbc6a300eba65d280a24549344bb8ee3fd1bab433045017bb5aeb3573d"} Mar 19 16:46:10 crc kubenswrapper[4792]: I0319 16:46:10.651944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"926e8f04e160763c1ba79690fa2918f1fdf94c03e649f27915d7f2ed8bdcd35b"} Mar 19 16:46:11 crc kubenswrapper[4792]: I0319 16:46:11.660386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1dce7bacae17fa34fa455a83f50e64865a84b0b264c561839bcb9a69b383fddc"} Mar 19 16:46:11 crc kubenswrapper[4792]: I0319 16:46:11.661632 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:11 crc kubenswrapper[4792]: I0319 16:46:11.660809 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:11 crc kubenswrapper[4792]: I0319 16:46:11.661808 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:11 crc kubenswrapper[4792]: I0319 16:46:11.873508 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:46:13 crc kubenswrapper[4792]: I0319 16:46:13.753736 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:13 crc kubenswrapper[4792]: I0319 16:46:13.754220 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:13 crc kubenswrapper[4792]: I0319 16:46:13.760169 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:14 crc kubenswrapper[4792]: I0319 16:46:14.775916 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:46:14 crc kubenswrapper[4792]: I0319 16:46:14.776159 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 19 16:46:14 crc kubenswrapper[4792]: I0319 16:46:14.776249 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 19 16:46:16 crc kubenswrapper[4792]: I0319 16:46:16.672247 4792 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:17 crc kubenswrapper[4792]: I0319 16:46:17.697377 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 
16:46:17 crc kubenswrapper[4792]: I0319 16:46:17.697719 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:17 crc kubenswrapper[4792]: I0319 16:46:17.704627 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:17 crc kubenswrapper[4792]: I0319 16:46:17.763805 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e7aa140-cd83-475e-bb42-2ac1785a92e5" Mar 19 16:46:18 crc kubenswrapper[4792]: I0319 16:46:18.704099 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:18 crc kubenswrapper[4792]: I0319 16:46:18.704141 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d4fb2ae4-d8ec-4827-a0ea-0fdc29e372b9" Mar 19 16:46:18 crc kubenswrapper[4792]: I0319 16:46:18.708570 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e7aa140-cd83-475e-bb42-2ac1785a92e5" Mar 19 16:46:24 crc kubenswrapper[4792]: I0319 16:46:24.775314 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 19 16:46:24 crc kubenswrapper[4792]: I0319 16:46:24.775814 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 19 16:46:25 crc kubenswrapper[4792]: I0319 16:46:25.890270 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.020275 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.095637 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.667627 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.841988 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.930032 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 16:46:26 crc kubenswrapper[4792]: I0319 16:46:26.987610 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 19 16:46:27 crc kubenswrapper[4792]: I0319 16:46:27.072539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 16:46:27 crc kubenswrapper[4792]: I0319 16:46:27.077923 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 16:46:27 crc kubenswrapper[4792]: I0319 16:46:27.323612 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 16:46:27 crc kubenswrapper[4792]: I0319 16:46:27.767048 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.039221 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.309517 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.325975 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.506642 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.682286 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.684085 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.698484 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.735929 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.877628 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 
16:46:28.885402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 16:46:28 crc kubenswrapper[4792]: I0319 16:46:28.984329 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.012500 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.101833 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.155374 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.167122 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.271966 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.276830 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.389682 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.466941 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.532266 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 
16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.562549 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.571773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.708039 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.713987 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.828516 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.871722 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.936933 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 19 16:46:29 crc kubenswrapper[4792]: I0319 16:46:29.963730 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.233282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.397703 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.435149 4792 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.477048 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.742973 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.744414 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.779506 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.841127 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.844557 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.892057 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.926489 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.926831 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 19 16:46:30 crc kubenswrapper[4792]: I0319 16:46:30.961280 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.015780 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.032674 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.075576 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.130769 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.222712 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.230813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.305780 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.315894 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.321228 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.351706 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.359442 
4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.441994 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.443868 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.684987 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.691354 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.695273 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.867680 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.927975 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 19 16:46:31 crc kubenswrapper[4792]: I0319 16:46:31.928714 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.080817 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.196943 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.241085 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.274606 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.313712 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.325701 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.498809 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.507193 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.515098 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.520087 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.628555 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.675738 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 
16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.687531 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.738153 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.741155 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.815263 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.826763 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.843997 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.881584 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 16:46:32 crc kubenswrapper[4792]: I0319 16:46:32.975664 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.002921 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.003813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.013289 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.079702 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.093387 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.094651 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.134008 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.175739 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.196058 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.196699 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.19667894 podStartE2EDuration="40.19667894s" podCreationTimestamp="2026-03-19 16:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:46:16.731988247 +0000 UTC m=+339.878045787" watchObservedRunningTime="2026-03-19 16:46:33.19667894 +0000 UTC m=+356.342736490" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.202055 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.202115 4792 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.207508 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.207725 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.226367 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.226349253 podStartE2EDuration="17.226349253s" podCreationTimestamp="2026-03-19 16:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:46:33.225256704 +0000 UTC m=+356.371314264" watchObservedRunningTime="2026-03-19 16:46:33.226349253 +0000 UTC m=+356.372406803" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.371914 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.387007 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.457762 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.466383 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.582693 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 
19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.587733 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.594132 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.626280 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.773315 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.790327 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.822462 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.825215 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.838369 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 16:46:33 crc kubenswrapper[4792]: I0319 16:46:33.852458 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.028046 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.128090 4792 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.189605 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.540649 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.715981 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.775750 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.775824 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.775912 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.776607 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f83288cbf8ac45a3a8fe54cadc0e821234a7bbb9411a47fabdad0cc9d54c1332"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed 
startup probe, will be restarted" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.776717 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f83288cbf8ac45a3a8fe54cadc0e821234a7bbb9411a47fabdad0cc9d54c1332" gracePeriod=30 Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.799505 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.805516 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.907058 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.937669 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.938458 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 16:46:34 crc kubenswrapper[4792]: I0319 16:46:34.974155 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.111634 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.149859 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.201996 4792 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.216394 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.247939 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.267811 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.289341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.330949 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.373762 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.469417 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.492093 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.511744 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.516132 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.563359 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.571150 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.579421 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.588434 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.652286 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.763526 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.830152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.895612 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.934387 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.974030 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 16:46:35 crc kubenswrapper[4792]: I0319 16:46:35.996892 4792 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.022251 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.033972 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.065773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.067573 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.132349 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.145185 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.205903 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.221105 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.253269 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.314722 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 
16:46:36.417696 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.469291 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.478441 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.502899 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.517296 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.550003 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.641597 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.655240 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.695888 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.749041 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.768995 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: 
I0319 16:46:36.797872 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.819361 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 16:46:36 crc kubenswrapper[4792]: I0319 16:46:36.838667 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.051009 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.066927 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.142966 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.250679 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.298619 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.300330 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.347185 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.447457 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.455383 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.560116 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.602258 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.681206 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.853220 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.877614 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.879039 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.909330 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 16:46:37 crc kubenswrapper[4792]: I0319 16:46:37.924367 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.058506 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 
16:46:38.159922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.274667 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.325285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.344254 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.506937 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.611738 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.673793 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.881568 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.897072 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.953908 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.981940 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.988249 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 16:46:38 crc kubenswrapper[4792]: I0319 16:46:38.989755 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.015528 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.015746 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3" gracePeriod=5 Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.026913 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.030505 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.065538 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.069054 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.073324 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.216267 
4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.354641 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.448880 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.495997 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.496059 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.610493 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.819251 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.874706 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.885204 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 19 16:46:39 crc kubenswrapper[4792]: I0319 16:46:39.933695 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.042428 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 19 16:46:40 
crc kubenswrapper[4792]: I0319 16:46:40.113260 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.117583 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.179535 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.260869 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.288469 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.578171 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.665758 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.725441 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.736949 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 16:46:40 crc kubenswrapper[4792]: I0319 16:46:40.954081 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.214928 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.216455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.247965 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.249904 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.266186 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.376283 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.429997 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.551009 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.600027 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.745351 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 16:46:41.827192 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 16:46:41 crc kubenswrapper[4792]: I0319 
16:46:41.900674 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 16:46:42 crc kubenswrapper[4792]: I0319 16:46:42.062530 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 19 16:46:42 crc kubenswrapper[4792]: I0319 16:46:42.251424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 16:46:42 crc kubenswrapper[4792]: I0319 16:46:42.459523 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 16:46:42 crc kubenswrapper[4792]: I0319 16:46:42.463354 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 16:46:42 crc kubenswrapper[4792]: I0319 16:46:42.550819 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.214116 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.629144 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.629253 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.784727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785245 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785333 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785373 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785384 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785486 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785497 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785506 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.785601 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.786450 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.786490 4792 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.786507 4792 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.786525 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.798023 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.852234 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.852338 4792 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3" exitCode=137 Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.852410 4792 scope.go:117] "RemoveContainer" containerID="1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.852460 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.875320 4792 scope.go:117] "RemoveContainer" containerID="1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3" Mar 19 16:46:44 crc kubenswrapper[4792]: E0319 16:46:44.875865 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3\": container with ID starting with 1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3 not found: ID does not exist" containerID="1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.875911 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3"} err="failed to get container status \"1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3\": rpc error: code = NotFound desc = could not find container 
\"1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3\": container with ID starting with 1730e9649434f6b77a8fa7f834754abdbbc4e29159eb4f9c070ecc32c3efaae3 not found: ID does not exist" Mar 19 16:46:44 crc kubenswrapper[4792]: I0319 16:46:44.888249 4792 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.750646 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.751622 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.766268 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.766309 4792 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a224cba2-c465-40e0-ad39-69c8bf797502" Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.772769 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 19 16:46:45 crc kubenswrapper[4792]: I0319 16:46:45.772829 4792 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a224cba2-c465-40e0-ad39-69c8bf797502" Mar 19 16:46:53 crc kubenswrapper[4792]: I0319 16:46:53.540025 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 19 16:47:01 crc 
kubenswrapper[4792]: I0319 16:47:01.550529 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565646-v4pgg"] Mar 19 16:47:01 crc kubenswrapper[4792]: E0319 16:47:01.551292 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" containerName="installer" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.551307 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" containerName="installer" Mar 19 16:47:01 crc kubenswrapper[4792]: E0319 16:47:01.551326 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.551333 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.551446 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdf5c25-1486-4e52-ba7a-381ceb4d6521" containerName="installer" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.551468 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.551888 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.553449 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.553652 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.553763 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.556298 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565646-v4pgg"] Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.624175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbgk\" (UniqueName: \"kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk\") pod \"auto-csr-approver-29565646-v4pgg\" (UID: \"5b93baee-7a36-4e2a-9538-9e3663a1b1ab\") " pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.725416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbgk\" (UniqueName: \"kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk\") pod \"auto-csr-approver-29565646-v4pgg\" (UID: \"5b93baee-7a36-4e2a-9538-9e3663a1b1ab\") " pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.782543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbgk\" (UniqueName: \"kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk\") pod \"auto-csr-approver-29565646-v4pgg\" (UID: \"5b93baee-7a36-4e2a-9538-9e3663a1b1ab\") " 
pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:01 crc kubenswrapper[4792]: I0319 16:47:01.883793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:02 crc kubenswrapper[4792]: I0319 16:47:02.095270 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565646-v4pgg"] Mar 19 16:47:02 crc kubenswrapper[4792]: I0319 16:47:02.969298 4792 generic.go:334] "Generic (PLEG): container finished" podID="317303db-f645-48f1-80f5-23e798ffd8f0" containerID="f20db64ee969bda57ee51ed8b3f4f20a8c6c5fd5908bf8dc058ceff82bb28069" exitCode=0 Mar 19 16:47:02 crc kubenswrapper[4792]: I0319 16:47:02.969384 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerDied","Data":"f20db64ee969bda57ee51ed8b3f4f20a8c6c5fd5908bf8dc058ceff82bb28069"} Mar 19 16:47:02 crc kubenswrapper[4792]: I0319 16:47:02.970248 4792 scope.go:117] "RemoveContainer" containerID="f20db64ee969bda57ee51ed8b3f4f20a8c6c5fd5908bf8dc058ceff82bb28069" Mar 19 16:47:02 crc kubenswrapper[4792]: I0319 16:47:02.971065 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" event={"ID":"5b93baee-7a36-4e2a-9538-9e3663a1b1ab","Type":"ContainerStarted","Data":"27859568c2f00769b382021c98c1dba5dd132c712ed86130976e6a1b7ac69013"} Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.979031 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5jwjp_317303db-f645-48f1-80f5-23e798ffd8f0/marketplace-operator/1.log" Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.979806 4792 generic.go:334] "Generic (PLEG): container finished" podID="317303db-f645-48f1-80f5-23e798ffd8f0" 
containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" exitCode=1 Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.979906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerDied","Data":"c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6"} Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.979994 4792 scope.go:117] "RemoveContainer" containerID="f20db64ee969bda57ee51ed8b3f4f20a8c6c5fd5908bf8dc058ceff82bb28069" Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.980522 4792 scope.go:117] "RemoveContainer" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:47:03 crc kubenswrapper[4792]: E0319 16:47:03.980734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-5jwjp_openshift-marketplace(317303db-f645-48f1-80f5-23e798ffd8f0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.984370 4792 generic.go:334] "Generic (PLEG): container finished" podID="5b93baee-7a36-4e2a-9538-9e3663a1b1ab" containerID="b32560c21d8ad22ed1f526205d102d3286738e6cec6de4a478d825f9b5eb71c7" exitCode=0 Mar 19 16:47:03 crc kubenswrapper[4792]: I0319 16:47:03.984421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" event={"ID":"5b93baee-7a36-4e2a-9538-9e3663a1b1ab","Type":"ContainerDied","Data":"b32560c21d8ad22ed1f526205d102d3286738e6cec6de4a478d825f9b5eb71c7"} Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.992582 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.994186 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.994740 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.994778 4792 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f83288cbf8ac45a3a8fe54cadc0e821234a7bbb9411a47fabdad0cc9d54c1332" exitCode=137 Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.994829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f83288cbf8ac45a3a8fe54cadc0e821234a7bbb9411a47fabdad0cc9d54c1332"} Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.994880 4792 scope.go:117] "RemoveContainer" containerID="8b43f6f806f15696576ad328e3342a93265d8903f205bf84c8bb1a83270ed40e" Mar 19 16:47:04 crc kubenswrapper[4792]: I0319 16:47:04.996139 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5jwjp_317303db-f645-48f1-80f5-23e798ffd8f0/marketplace-operator/1.log" Mar 19 16:47:05 crc kubenswrapper[4792]: I0319 16:47:05.308739 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:05 crc kubenswrapper[4792]: I0319 16:47:05.370685 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cbgk\" (UniqueName: \"kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk\") pod \"5b93baee-7a36-4e2a-9538-9e3663a1b1ab\" (UID: \"5b93baee-7a36-4e2a-9538-9e3663a1b1ab\") " Mar 19 16:47:05 crc kubenswrapper[4792]: I0319 16:47:05.378976 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk" (OuterVolumeSpecName: "kube-api-access-6cbgk") pod "5b93baee-7a36-4e2a-9538-9e3663a1b1ab" (UID: "5b93baee-7a36-4e2a-9538-9e3663a1b1ab"). InnerVolumeSpecName "kube-api-access-6cbgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:47:05 crc kubenswrapper[4792]: I0319 16:47:05.472674 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cbgk\" (UniqueName: \"kubernetes.io/projected/5b93baee-7a36-4e2a-9538-9e3663a1b1ab-kube-api-access-6cbgk\") on node \"crc\" DevicePath \"\"" Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.004429 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" event={"ID":"5b93baee-7a36-4e2a-9538-9e3663a1b1ab","Type":"ContainerDied","Data":"27859568c2f00769b382021c98c1dba5dd132c712ed86130976e6a1b7ac69013"} Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.004738 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27859568c2f00769b382021c98c1dba5dd132c712ed86130976e6a1b7ac69013" Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.004469 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565646-v4pgg" Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.008997 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.011151 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 19 16:47:06 crc kubenswrapper[4792]: I0319 16:47:06.011205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"407ceb7c9d1479843ac92c2ca9be9148d9d8729632273de5e73ec45afef9c04e"} Mar 19 16:47:11 crc kubenswrapper[4792]: I0319 16:47:11.381740 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:47:11 crc kubenswrapper[4792]: I0319 16:47:11.382271 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:47:11 crc kubenswrapper[4792]: I0319 16:47:11.382893 4792 scope.go:117] "RemoveContainer" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:47:11 crc kubenswrapper[4792]: E0319 16:47:11.383225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-5jwjp_openshift-marketplace(317303db-f645-48f1-80f5-23e798ffd8f0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" Mar 19 
16:47:11 crc kubenswrapper[4792]: I0319 16:47:11.873615 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:47:14 crc kubenswrapper[4792]: I0319 16:47:14.776053 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:47:14 crc kubenswrapper[4792]: I0319 16:47:14.782149 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:47:15 crc kubenswrapper[4792]: I0319 16:47:15.082991 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 19 16:47:16 crc kubenswrapper[4792]: I0319 16:47:16.355320 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 16:47:24 crc kubenswrapper[4792]: I0319 16:47:24.763652 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 16:47:25 crc kubenswrapper[4792]: I0319 16:47:25.739990 4792 scope.go:117] "RemoveContainer" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:47:26 crc kubenswrapper[4792]: I0319 16:47:26.149668 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5jwjp_317303db-f645-48f1-80f5-23e798ffd8f0/marketplace-operator/1.log" Mar 19 16:47:26 crc kubenswrapper[4792]: I0319 16:47:26.149721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerStarted","Data":"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430"} Mar 19 16:47:26 crc 
kubenswrapper[4792]: I0319 16:47:26.150062 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:47:26 crc kubenswrapper[4792]: I0319 16:47:26.151101 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwjp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 19 16:47:26 crc kubenswrapper[4792]: I0319 16:47:26.151152 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 19 16:47:27 crc kubenswrapper[4792]: I0319 16:47:27.157740 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:47:50 crc kubenswrapper[4792]: I0319 16:47:50.231469 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:47:50 crc kubenswrapper[4792]: I0319 16:47:50.232151 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.147518 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29565648-v84mv"] Mar 19 16:48:00 crc kubenswrapper[4792]: E0319 16:48:00.148281 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b93baee-7a36-4e2a-9538-9e3663a1b1ab" containerName="oc" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.148295 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b93baee-7a36-4e2a-9538-9e3663a1b1ab" containerName="oc" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.148405 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b93baee-7a36-4e2a-9538-9e3663a1b1ab" containerName="oc" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.148790 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.150752 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.151006 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.151313 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.157335 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565648-v84mv"] Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.222179 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfk7d\" (UniqueName: \"kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d\") pod \"auto-csr-approver-29565648-v84mv\" (UID: \"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62\") " pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 
16:48:00.323945 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfk7d\" (UniqueName: \"kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d\") pod \"auto-csr-approver-29565648-v84mv\" (UID: \"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62\") " pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.342460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfk7d\" (UniqueName: \"kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d\") pod \"auto-csr-approver-29565648-v84mv\" (UID: \"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62\") " pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.466160 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:00 crc kubenswrapper[4792]: I0319 16:48:00.682802 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565648-v84mv"] Mar 19 16:48:01 crc kubenswrapper[4792]: I0319 16:48:01.347246 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565648-v84mv" event={"ID":"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62","Type":"ContainerStarted","Data":"8c34789f737a4a5b07fa14b2494327ef0df6694caf9b70e7a9fc2cf71314b2ed"} Mar 19 16:48:02 crc kubenswrapper[4792]: I0319 16:48:02.354151 4792 generic.go:334] "Generic (PLEG): container finished" podID="1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" containerID="d29c6d45e264553fe70a8bf0fe437bb6e533fab99caed59de79a2f29296e163b" exitCode=0 Mar 19 16:48:02 crc kubenswrapper[4792]: I0319 16:48:02.354245 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565648-v84mv" 
event={"ID":"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62","Type":"ContainerDied","Data":"d29c6d45e264553fe70a8bf0fe437bb6e533fab99caed59de79a2f29296e163b"} Mar 19 16:48:03 crc kubenswrapper[4792]: I0319 16:48:03.631671 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:03 crc kubenswrapper[4792]: I0319 16:48:03.767674 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfk7d\" (UniqueName: \"kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d\") pod \"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62\" (UID: \"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62\") " Mar 19 16:48:03 crc kubenswrapper[4792]: I0319 16:48:03.772541 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d" (OuterVolumeSpecName: "kube-api-access-wfk7d") pod "1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" (UID: "1f9c6bb2-5fe1-41f6-bd92-f274417cbe62"). InnerVolumeSpecName "kube-api-access-wfk7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:48:03 crc kubenswrapper[4792]: I0319 16:48:03.869088 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfk7d\" (UniqueName: \"kubernetes.io/projected/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62-kube-api-access-wfk7d\") on node \"crc\" DevicePath \"\"" Mar 19 16:48:04 crc kubenswrapper[4792]: I0319 16:48:04.376571 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565648-v84mv" event={"ID":"1f9c6bb2-5fe1-41f6-bd92-f274417cbe62","Type":"ContainerDied","Data":"8c34789f737a4a5b07fa14b2494327ef0df6694caf9b70e7a9fc2cf71314b2ed"} Mar 19 16:48:04 crc kubenswrapper[4792]: I0319 16:48:04.376908 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c34789f737a4a5b07fa14b2494327ef0df6694caf9b70e7a9fc2cf71314b2ed" Mar 19 16:48:04 crc kubenswrapper[4792]: I0319 16:48:04.376635 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565648-v84mv" Mar 19 16:48:20 crc kubenswrapper[4792]: I0319 16:48:20.230673 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:48:20 crc kubenswrapper[4792]: I0319 16:48:20.231318 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.108584 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-prfmr"] Mar 19 16:48:43 crc kubenswrapper[4792]: E0319 16:48:43.109382 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" containerName="oc" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.109396 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" containerName="oc" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.109523 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" containerName="oc" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.109957 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.130558 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-prfmr"] Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7t7\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-kube-api-access-mt7t7\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232775 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232816 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-trusted-ca\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232864 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-bound-sa-token\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232892 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-tls\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.232979 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-certificates\") pod \"image-registry-66df7c8f76-prfmr\" (UID: 
\"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.233184 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.254266 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt7t7\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-kube-api-access-mt7t7\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-trusted-ca\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-bound-sa-token\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334553 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-tls\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-certificates\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.334638 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.335673 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.336247 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-trusted-ca\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.336303 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-certificates\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.340129 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.341293 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-registry-tls\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc 
kubenswrapper[4792]: I0319 16:48:43.350009 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-bound-sa-token\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.351082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt7t7\" (UniqueName: \"kubernetes.io/projected/7c6f611e-37c6-424d-9c46-32a92c5ac3b7-kube-api-access-mt7t7\") pod \"image-registry-66df7c8f76-prfmr\" (UID: \"7c6f611e-37c6-424d-9c46-32a92c5ac3b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.442611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:43 crc kubenswrapper[4792]: I0319 16:48:43.844025 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-prfmr"] Mar 19 16:48:44 crc kubenswrapper[4792]: I0319 16:48:44.617212 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" event={"ID":"7c6f611e-37c6-424d-9c46-32a92c5ac3b7","Type":"ContainerStarted","Data":"623193e0c5304e617dd8066a9b723a9aab4775663bdc4274103531c7920ae7a7"} Mar 19 16:48:44 crc kubenswrapper[4792]: I0319 16:48:44.618507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" event={"ID":"7c6f611e-37c6-424d-9c46-32a92c5ac3b7","Type":"ContainerStarted","Data":"43fd32e302965654e0063aaa8dcd6437c798b0db0ead03174cc0543c31103966"} Mar 19 16:48:44 crc kubenswrapper[4792]: I0319 16:48:44.618642 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:48:44 crc kubenswrapper[4792]: I0319 16:48:44.634046 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" podStartSLOduration=1.6340232860000001 podStartE2EDuration="1.634023286s" podCreationTimestamp="2026-03-19 16:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:48:44.631673142 +0000 UTC m=+487.777730682" watchObservedRunningTime="2026-03-19 16:48:44.634023286 +0000 UTC m=+487.780080836" Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.231293 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.232091 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.232175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.232975 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.233081 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef" gracePeriod=600 Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.657698 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef" exitCode=0 Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.657804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef"} Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.658249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb"} Mar 19 16:48:50 crc kubenswrapper[4792]: I0319 16:48:50.658314 4792 scope.go:117] "RemoveContainer" containerID="040a95ee0c379cb6dfbc9cfc1291393f5be39bc3168dfa04bf40b920267ba08e" Mar 19 16:49:03 crc kubenswrapper[4792]: I0319 16:49:03.450341 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" Mar 19 16:49:03 crc kubenswrapper[4792]: I0319 16:49:03.502854 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 
16:49:08.042966 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4724c"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.044488 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4724c" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="registry-server" containerID="cri-o://9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff" gracePeriod=30 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.049292 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwbvn"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.049658 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qwbvn" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="registry-server" containerID="cri-o://b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c" gracePeriod=30 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.075324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.075624 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" containerID="cri-o://04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430" gracePeriod=30 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.077858 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.078117 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5pth" 
podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="registry-server" containerID="cri-o://53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9" gracePeriod=30 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.088970 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.089440 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-25ctb" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="registry-server" containerID="cri-o://4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558" gracePeriod=30 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.094151 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vswr4"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.095018 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.100623 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vswr4"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.191076 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9js7m\" (UniqueName: \"kubernetes.io/projected/a9918a46-a0e8-400e-bd0c-0af4b0d05339-kube-api-access-9js7m\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.191139 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.191192 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.294442 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vswr4\" (UID: 
\"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.294757 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.294812 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9js7m\" (UniqueName: \"kubernetes.io/projected/a9918a46-a0e8-400e-bd0c-0af4b0d05339-kube-api-access-9js7m\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.296865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.310802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9918a46-a0e8-400e-bd0c-0af4b0d05339-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.313130 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9js7m\" 
(UniqueName: \"kubernetes.io/projected/a9918a46-a0e8-400e-bd0c-0af4b0d05339-kube-api-access-9js7m\") pod \"marketplace-operator-79b997595-vswr4\" (UID: \"a9918a46-a0e8-400e-bd0c-0af4b0d05339\") " pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.410098 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.424249 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.456985 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5jwjp_317303db-f645-48f1-80f5-23e798ffd8f0/marketplace-operator/1.log" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.457070 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.467658 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.535780 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.543642 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25ctb" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598374 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjtvj\" (UniqueName: \"kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj\") pod \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content\") pod \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598475 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7llr7\" (UniqueName: \"kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7\") pod \"317303db-f645-48f1-80f5-23e798ffd8f0\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598500 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjzc\" (UniqueName: \"kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc\") pod \"efcab6c7-88f0-4335-a972-bdd8933433dc\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598537 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca\") pod \"317303db-f645-48f1-80f5-23e798ffd8f0\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598562 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities\") pod \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\" (UID: \"f04d1453-ed31-4e0f-a10c-89ebac7f8f51\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content\") pod \"efcab6c7-88f0-4335-a972-bdd8933433dc\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598617 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics\") pod \"317303db-f645-48f1-80f5-23e798ffd8f0\" (UID: \"317303db-f645-48f1-80f5-23e798ffd8f0\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.598643 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities\") pod \"efcab6c7-88f0-4335-a972-bdd8933433dc\" (UID: \"efcab6c7-88f0-4335-a972-bdd8933433dc\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.600155 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities" (OuterVolumeSpecName: "utilities") pod "efcab6c7-88f0-4335-a972-bdd8933433dc" (UID: "efcab6c7-88f0-4335-a972-bdd8933433dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.600803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "317303db-f645-48f1-80f5-23e798ffd8f0" (UID: "317303db-f645-48f1-80f5-23e798ffd8f0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.601656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities" (OuterVolumeSpecName: "utilities") pod "f04d1453-ed31-4e0f-a10c-89ebac7f8f51" (UID: "f04d1453-ed31-4e0f-a10c-89ebac7f8f51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.602978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc" (OuterVolumeSpecName: "kube-api-access-bwjzc") pod "efcab6c7-88f0-4335-a972-bdd8933433dc" (UID: "efcab6c7-88f0-4335-a972-bdd8933433dc"). InnerVolumeSpecName "kube-api-access-bwjzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.605576 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj" (OuterVolumeSpecName: "kube-api-access-wjtvj") pod "f04d1453-ed31-4e0f-a10c-89ebac7f8f51" (UID: "f04d1453-ed31-4e0f-a10c-89ebac7f8f51"). InnerVolumeSpecName "kube-api-access-wjtvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.620719 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7" (OuterVolumeSpecName: "kube-api-access-7llr7") pod "317303db-f645-48f1-80f5-23e798ffd8f0" (UID: "317303db-f645-48f1-80f5-23e798ffd8f0"). InnerVolumeSpecName "kube-api-access-7llr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.622462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "317303db-f645-48f1-80f5-23e798ffd8f0" (UID: "317303db-f645-48f1-80f5-23e798ffd8f0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.659870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f04d1453-ed31-4e0f-a10c-89ebac7f8f51" (UID: "f04d1453-ed31-4e0f-a10c-89ebac7f8f51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.665464 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vswr4"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.668577 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efcab6c7-88f0-4335-a972-bdd8933433dc" (UID: "efcab6c7-88f0-4335-a972-bdd8933433dc"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699470 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwssq\" (UniqueName: \"kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq\") pod \"de7d0c67-0339-42c9-8330-f80dfd39c860\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699538 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgzrl\" (UniqueName: \"kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl\") pod \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities\") pod \"de7d0c67-0339-42c9-8330-f80dfd39c860\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content\") pod \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699658 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content\") pod \"de7d0c67-0339-42c9-8330-f80dfd39c860\" (UID: \"de7d0c67-0339-42c9-8330-f80dfd39c860\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.699737 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities\") pod \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\" (UID: \"7b49f828-0dec-4a3f-9247-7ef8b8882b52\") " Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700008 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7llr7\" (UniqueName: \"kubernetes.io/projected/317303db-f645-48f1-80f5-23e798ffd8f0-kube-api-access-7llr7\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700032 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwjzc\" (UniqueName: \"kubernetes.io/projected/efcab6c7-88f0-4335-a972-bdd8933433dc-kube-api-access-bwjzc\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700046 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700058 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700068 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700079 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efcab6c7-88f0-4335-a972-bdd8933433dc-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700089 4792 reconciler_common.go:293] "Volume detached for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/317303db-f645-48f1-80f5-23e798ffd8f0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700101 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjtvj\" (UniqueName: \"kubernetes.io/projected/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-kube-api-access-wjtvj\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.700111 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d1453-ed31-4e0f-a10c-89ebac7f8f51-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.701166 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities" (OuterVolumeSpecName: "utilities") pod "7b49f828-0dec-4a3f-9247-7ef8b8882b52" (UID: "7b49f828-0dec-4a3f-9247-7ef8b8882b52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.704151 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities" (OuterVolumeSpecName: "utilities") pod "de7d0c67-0339-42c9-8330-f80dfd39c860" (UID: "de7d0c67-0339-42c9-8330-f80dfd39c860"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.705966 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl" (OuterVolumeSpecName: "kube-api-access-vgzrl") pod "7b49f828-0dec-4a3f-9247-7ef8b8882b52" (UID: "7b49f828-0dec-4a3f-9247-7ef8b8882b52"). InnerVolumeSpecName "kube-api-access-vgzrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.719747 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq" (OuterVolumeSpecName: "kube-api-access-pwssq") pod "de7d0c67-0339-42c9-8330-f80dfd39c860" (UID: "de7d0c67-0339-42c9-8330-f80dfd39c860"). InnerVolumeSpecName "kube-api-access-pwssq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.731383 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b49f828-0dec-4a3f-9247-7ef8b8882b52" (UID: "7b49f828-0dec-4a3f-9247-7ef8b8882b52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.801036 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwssq\" (UniqueName: \"kubernetes.io/projected/de7d0c67-0339-42c9-8330-f80dfd39c860-kube-api-access-pwssq\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.801065 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgzrl\" (UniqueName: \"kubernetes.io/projected/7b49f828-0dec-4a3f-9247-7ef8b8882b52-kube-api-access-vgzrl\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.801074 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.801083 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.801092 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b49f828-0dec-4a3f-9247-7ef8b8882b52-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.830094 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de7d0c67-0339-42c9-8330-f80dfd39c860" (UID: "de7d0c67-0339-42c9-8330-f80dfd39c860"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.902310 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7d0c67-0339-42c9-8330-f80dfd39c860-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.914169 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" event={"ID":"a9918a46-a0e8-400e-bd0c-0af4b0d05339","Type":"ContainerStarted","Data":"f7f8a3a948ac34c87dd12ecdca73532f8f71ec116da7d0b41e9d3f35cbe218ce"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.914456 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.914555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" event={"ID":"a9918a46-a0e8-400e-bd0c-0af4b0d05339","Type":"ContainerStarted","Data":"75d64858fb38ec25e673118424c734a3faac67e5591cbf5cd396d6265214c7e1"} Mar 19 16:49:08 crc kubenswrapper[4792]: 
I0319 16:49:08.916378 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vswr4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" start-of-body= Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.916510 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podUID="a9918a46-a0e8-400e-bd0c-0af4b0d05339" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.917652 4792 generic.go:334] "Generic (PLEG): container finished" podID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerID="4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558" exitCode=0 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.917780 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25ctb" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.917746 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerDied","Data":"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.917966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ctb" event={"ID":"de7d0c67-0339-42c9-8330-f80dfd39c860","Type":"ContainerDied","Data":"bb99d12c3721d9d4a893146fbcde3f2f55767fc6f1fba97dd8eb4d25ef9fe898"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.918030 4792 scope.go:117] "RemoveContainer" containerID="4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.922739 4792 generic.go:334] "Generic (PLEG): container finished" podID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerID="b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c" exitCode=0 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.922893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerDied","Data":"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.922953 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwbvn" event={"ID":"efcab6c7-88f0-4335-a972-bdd8933433dc","Type":"ContainerDied","Data":"fbbeb8b2949a9316867c2869e8085395d0a7f0557e42c0bbcdecc6bab0f670da"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.923080 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qwbvn" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.925155 4792 generic.go:334] "Generic (PLEG): container finished" podID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerID="9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff" exitCode=0 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.925294 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4724c" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.925554 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerDied","Data":"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.925672 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4724c" event={"ID":"f04d1453-ed31-4e0f-a10c-89ebac7f8f51","Type":"ContainerDied","Data":"62ff0e797d1d9f6c96535d715b5cd94194c02f15a2e3845a59523179fdc83c45"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.927761 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerID="53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9" exitCode=0 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.927813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerDied","Data":"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.928073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pth" 
event={"ID":"7b49f828-0dec-4a3f-9247-7ef8b8882b52","Type":"ContainerDied","Data":"6d092c6c7d989b6b100f88c9bb43f51865b5189fa21ce5f81aceec6760b3f56d"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.927874 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pth" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.934266 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podStartSLOduration=0.93425356 podStartE2EDuration="934.25356ms" podCreationTimestamp="2026-03-19 16:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:49:08.932327356 +0000 UTC m=+512.078384916" watchObservedRunningTime="2026-03-19 16:49:08.93425356 +0000 UTC m=+512.080311100" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.936488 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5jwjp_317303db-f645-48f1-80f5-23e798ffd8f0/marketplace-operator/1.log" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.936579 4792 generic.go:334] "Generic (PLEG): container finished" podID="317303db-f645-48f1-80f5-23e798ffd8f0" containerID="04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430" exitCode=0 Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.936617 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.936656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerDied","Data":"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.936698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwjp" event={"ID":"317303db-f645-48f1-80f5-23e798ffd8f0","Type":"ContainerDied","Data":"2f03a53615a08b64d80438bad54ab288d0e5bbefffbe7645b86fa79679e4b407"} Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.941509 4792 scope.go:117] "RemoveContainer" containerID="73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.959055 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.968750 4792 scope.go:117] "RemoveContainer" containerID="e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc" Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.970298 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-25ctb"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.980636 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"] Mar 19 16:49:08 crc kubenswrapper[4792]: I0319 16:49:08.984888 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pth"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.004912 4792 scope.go:117] "RemoveContainer" containerID="4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558" Mar 19 16:49:09 crc 
kubenswrapper[4792]: I0319 16:49:09.004930 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwbvn"] Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.005512 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558\": container with ID starting with 4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558 not found: ID does not exist" containerID="4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.005627 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558"} err="failed to get container status \"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558\": rpc error: code = NotFound desc = could not find container \"4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558\": container with ID starting with 4287f9e329c80e46a45e83389049afdb7fbb7175a4b34f12115777a873965558 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.005716 4792 scope.go:117] "RemoveContainer" containerID="73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.007299 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0\": container with ID starting with 73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0 not found: ID does not exist" containerID="73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.007347 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0"} err="failed to get container status \"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0\": rpc error: code = NotFound desc = could not find container \"73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0\": container with ID starting with 73bf2c684babf882a68f11759b8d9dad4014f4e2dd2ed532ef12bf5aeccc2cd0 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.007919 4792 scope.go:117] "RemoveContainer" containerID="e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.009450 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc\": container with ID starting with e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc not found: ID does not exist" containerID="e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.009491 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc"} err="failed to get container status \"e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc\": rpc error: code = NotFound desc = could not find container \"e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc\": container with ID starting with e90df6f0f4f2d712e7e8968acaa76a57bbbb08b9eb7a00a714187e45877b3edc not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.009568 4792 scope.go:117] "RemoveContainer" containerID="b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.013769 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-qwbvn"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.019387 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4724c"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.027583 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4724c"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.030042 4792 scope.go:117] "RemoveContainer" containerID="05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.031555 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.035255 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwjp"] Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.044553 4792 scope.go:117] "RemoveContainer" containerID="74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.060065 4792 scope.go:117] "RemoveContainer" containerID="b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.060548 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c\": container with ID starting with b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c not found: ID does not exist" containerID="b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.060595 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c"} err="failed to get container status \"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c\": rpc error: code = NotFound desc = could not find container \"b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c\": container with ID starting with b226c9cdf6a0470feb46928771540997b43c8efb90b8564c97b46232dbad026c not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.060617 4792 scope.go:117] "RemoveContainer" containerID="05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.061115 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe\": container with ID starting with 05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe not found: ID does not exist" containerID="05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.061134 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe"} err="failed to get container status \"05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe\": rpc error: code = NotFound desc = could not find container \"05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe\": container with ID starting with 05a241e457fa239a72e6b692809bc65568fd6da85274b29bf730f0dd5cb66fbe not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.061171 4792 scope.go:117] "RemoveContainer" containerID="74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.061467 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c\": container with ID starting with 74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c not found: ID does not exist" containerID="74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.061515 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c"} err="failed to get container status \"74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c\": rpc error: code = NotFound desc = could not find container \"74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c\": container with ID starting with 74ea60db58f20d7f2600b5f463ad68654ffa56ad014bb0e9a8cfbcca311b430c not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.061547 4792 scope.go:117] "RemoveContainer" containerID="9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.076985 4792 scope.go:117] "RemoveContainer" containerID="fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.090195 4792 scope.go:117] "RemoveContainer" containerID="6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.103783 4792 scope.go:117] "RemoveContainer" containerID="9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.104183 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff\": container with ID starting with 
9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff not found: ID does not exist" containerID="9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.104240 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff"} err="failed to get container status \"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff\": rpc error: code = NotFound desc = could not find container \"9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff\": container with ID starting with 9f281d88e5486e4a2da9dd7daee761a418858e706121d8f94249d47b18adf8ff not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.104274 4792 scope.go:117] "RemoveContainer" containerID="fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.104582 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd\": container with ID starting with fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd not found: ID does not exist" containerID="fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.104621 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd"} err="failed to get container status \"fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd\": rpc error: code = NotFound desc = could not find container \"fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd\": container with ID starting with fd9a313e04e367e1fd6ee07abe66c86f94b5979d5b9ceb8f2419ae9dec16b8fd not found: ID does not 
exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.104678 4792 scope.go:117] "RemoveContainer" containerID="6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.105310 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676\": container with ID starting with 6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676 not found: ID does not exist" containerID="6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.105344 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676"} err="failed to get container status \"6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676\": rpc error: code = NotFound desc = could not find container \"6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676\": container with ID starting with 6ba6ec85789391ec3d82a40ab706b48b9ab9bdbc0f0ddafd251a2a0ad2319676 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.105364 4792 scope.go:117] "RemoveContainer" containerID="53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.116617 4792 scope.go:117] "RemoveContainer" containerID="d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.132098 4792 scope.go:117] "RemoveContainer" containerID="88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.144699 4792 scope.go:117] "RemoveContainer" containerID="53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9" Mar 19 16:49:09 crc 
kubenswrapper[4792]: E0319 16:49:09.145051 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9\": container with ID starting with 53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9 not found: ID does not exist" containerID="53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145078 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9"} err="failed to get container status \"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9\": rpc error: code = NotFound desc = could not find container \"53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9\": container with ID starting with 53128bb0c599f9fce7c44c1a2a33cc0884794ebfb6dda11b06f76b1c9f9cf9c9 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145096 4792 scope.go:117] "RemoveContainer" containerID="d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.145501 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7\": container with ID starting with d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7 not found: ID does not exist" containerID="d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145544 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7"} err="failed to get container status 
\"d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7\": rpc error: code = NotFound desc = could not find container \"d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7\": container with ID starting with d626e1a5e2082dd8b25ebfc51992276b74c11d3de9a5eb10588f94971dc266c7 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145575 4792 scope.go:117] "RemoveContainer" containerID="88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.145884 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d\": container with ID starting with 88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d not found: ID does not exist" containerID="88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145911 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d"} err="failed to get container status \"88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d\": rpc error: code = NotFound desc = could not find container \"88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d\": container with ID starting with 88635ce62d09f7125aefea03a9abc332c77d6045026e2719434922b3e25b6f5d not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.145928 4792 scope.go:117] "RemoveContainer" containerID="04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.158810 4792 scope.go:117] "RemoveContainer" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.172705 4792 
scope.go:117] "RemoveContainer" containerID="04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.177782 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430\": container with ID starting with 04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430 not found: ID does not exist" containerID="04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.177872 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430"} err="failed to get container status \"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430\": rpc error: code = NotFound desc = could not find container \"04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430\": container with ID starting with 04e05b46d800ce2610f25f79d554f64975f54422be989d06be33f10d490d0430 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.177906 4792 scope.go:117] "RemoveContainer" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:49:09 crc kubenswrapper[4792]: E0319 16:49:09.178556 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6\": container with ID starting with c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6 not found: ID does not exist" containerID="c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.178599 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6"} err="failed to get container status \"c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6\": rpc error: code = NotFound desc = could not find container \"c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6\": container with ID starting with c10be19ac309d292c68d9562e6c52c691bc7a9924b715f99d68e2646cdb45ae6 not found: ID does not exist" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.748372 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" path="/var/lib/kubelet/pods/317303db-f645-48f1-80f5-23e798ffd8f0/volumes" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.749768 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" path="/var/lib/kubelet/pods/7b49f828-0dec-4a3f-9247-7ef8b8882b52/volumes" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.751004 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" path="/var/lib/kubelet/pods/de7d0c67-0339-42c9-8330-f80dfd39c860/volumes" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.753556 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" path="/var/lib/kubelet/pods/efcab6c7-88f0-4335-a972-bdd8933433dc/volumes" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.754787 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" path="/var/lib/kubelet/pods/f04d1453-ed31-4e0f-a10c-89ebac7f8f51/volumes" Mar 19 16:49:09 crc kubenswrapper[4792]: I0319 16:49:09.950266 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254459 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5hq59"] Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254721 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254739 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254753 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254760 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254771 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254779 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254791 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254799 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254808 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254815 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254825 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254832 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254860 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254868 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254878 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254885 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254895 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254902 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254909 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254915 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254925 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254932 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254942 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254949 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="extract-utilities" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254963 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254970 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="extract-content" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.254981 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.254988 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255222 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b49f828-0dec-4a3f-9247-7ef8b8882b52" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255239 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="efcab6c7-88f0-4335-a972-bdd8933433dc" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255248 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255290 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04d1453-ed31-4e0f-a10c-89ebac7f8f51" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255302 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7d0c67-0339-42c9-8330-f80dfd39c860" containerName="registry-server" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255314 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: E0319 16:49:10.255445 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255455 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.255640 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="317303db-f645-48f1-80f5-23e798ffd8f0" containerName="marketplace-operator" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.256260 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.258610 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.261106 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hq59"] Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.420766 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-utilities\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.420830 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-catalog-content\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.420931 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnfn\" (UniqueName: \"kubernetes.io/projected/380412c4-57ca-4428-838c-ab93fc6c71cc-kube-api-access-rhnfn\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.459207 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h7gpk"] Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.460596 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.463567 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.466889 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h7gpk"] Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.522434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-utilities\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.522491 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-catalog-content\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.522555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnfn\" (UniqueName: \"kubernetes.io/projected/380412c4-57ca-4428-838c-ab93fc6c71cc-kube-api-access-rhnfn\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.522916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-utilities\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 
19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.522982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/380412c4-57ca-4428-838c-ab93fc6c71cc-catalog-content\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.548690 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnfn\" (UniqueName: \"kubernetes.io/projected/380412c4-57ca-4428-838c-ab93fc6c71cc-kube-api-access-rhnfn\") pod \"redhat-marketplace-5hq59\" (UID: \"380412c4-57ca-4428-838c-ab93fc6c71cc\") " pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.576143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.623974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xrc\" (UniqueName: \"kubernetes.io/projected/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-kube-api-access-z8xrc\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.624038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-catalog-content\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.624066 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-utilities\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.727972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xrc\" (UniqueName: \"kubernetes.io/projected/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-kube-api-access-z8xrc\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.728380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-catalog-content\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.728413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-utilities\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.728984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-utilities\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.728994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-catalog-content\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.746782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xrc\" (UniqueName: \"kubernetes.io/projected/9faaddd3-77ad-4bc9-97ce-21a824aeb1c0-kube-api-access-z8xrc\") pod \"redhat-operators-h7gpk\" (UID: \"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0\") " pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.775753 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hq59"] Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.782677 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:10 crc kubenswrapper[4792]: W0319 16:49:10.784380 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380412c4_57ca_4428_838c_ab93fc6c71cc.slice/crio-f056bd219fbaafb30ce8f3b0b888eb1cbf9c59f90b507cdda3076a668b61b81c WatchSource:0}: Error finding container f056bd219fbaafb30ce8f3b0b888eb1cbf9c59f90b507cdda3076a668b61b81c: Status 404 returned error can't find the container with id f056bd219fbaafb30ce8f3b0b888eb1cbf9c59f90b507cdda3076a668b61b81c Mar 19 16:49:10 crc kubenswrapper[4792]: I0319 16:49:10.959775 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerStarted","Data":"f056bd219fbaafb30ce8f3b0b888eb1cbf9c59f90b507cdda3076a668b61b81c"} Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.044131 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-h7gpk"] Mar 19 16:49:11 crc kubenswrapper[4792]: W0319 16:49:11.072324 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9faaddd3_77ad_4bc9_97ce_21a824aeb1c0.slice/crio-25952a78df5c7b7f79726fe67a65660a6b789d5b60801944a784b379cac41097 WatchSource:0}: Error finding container 25952a78df5c7b7f79726fe67a65660a6b789d5b60801944a784b379cac41097: Status 404 returned error can't find the container with id 25952a78df5c7b7f79726fe67a65660a6b789d5b60801944a784b379cac41097 Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.966485 4792 generic.go:334] "Generic (PLEG): container finished" podID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerID="e9e3de2c79fd536b3f1aaf0b32db288c093d7b1edf682f9a0e27f2802caa11c8" exitCode=0 Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.966534 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerDied","Data":"e9e3de2c79fd536b3f1aaf0b32db288c093d7b1edf682f9a0e27f2802caa11c8"} Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.966584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerStarted","Data":"25952a78df5c7b7f79726fe67a65660a6b789d5b60801944a784b379cac41097"} Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.970687 4792 generic.go:334] "Generic (PLEG): container finished" podID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerID="253b646bb43979c5628622f817a8202662f205d3b00576357154a2e309271d3d" exitCode=0 Mar 19 16:49:11 crc kubenswrapper[4792]: I0319 16:49:11.970750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" 
event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerDied","Data":"253b646bb43979c5628622f817a8202662f205d3b00576357154a2e309271d3d"} Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.655994 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gb64t"] Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.657532 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.660089 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.664493 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gb64t"] Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.754897 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-utilities\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.754987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-catalog-content\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.755062 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nfm\" (UniqueName: \"kubernetes.io/projected/7a6583ed-1c62-448f-98f6-6055fe84c457-kube-api-access-j6nfm\") pod 
\"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.856479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-utilities\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.856586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-catalog-content\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.856682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6nfm\" (UniqueName: \"kubernetes.io/projected/7a6583ed-1c62-448f-98f6-6055fe84c457-kube-api-access-j6nfm\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.857616 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-utilities\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.857963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6583ed-1c62-448f-98f6-6055fe84c457-catalog-content\") pod \"community-operators-gb64t\" (UID: 
\"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.860177 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8m42q"] Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.861546 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.869047 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m42q"] Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.870463 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.881333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6nfm\" (UniqueName: \"kubernetes.io/projected/7a6583ed-1c62-448f-98f6-6055fe84c457-kube-api-access-j6nfm\") pod \"community-operators-gb64t\" (UID: \"7a6583ed-1c62-448f-98f6-6055fe84c457\") " pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.957535 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-utilities\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.957594 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-catalog-content\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " 
pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.957622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdgt\" (UniqueName: \"kubernetes.io/projected/84114ace-d7fd-41a3-9fa6-87df44501023-kube-api-access-9vdgt\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.976378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerStarted","Data":"0cb0f961ef70b2d094b50592147b6d9f8e9f3f89659323b7fa939c26ba819ac7"} Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.984926 4792 generic.go:334] "Generic (PLEG): container finished" podID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerID="2ef40cb8e06951317e930b0f0b242dc78cb866cb77a375a9018390b6304436f0" exitCode=0 Mar 19 16:49:12 crc kubenswrapper[4792]: I0319 16:49:12.984967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerDied","Data":"2ef40cb8e06951317e930b0f0b242dc78cb866cb77a375a9018390b6304436f0"} Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.016220 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.059607 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-utilities\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.060258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-catalog-content\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.060394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdgt\" (UniqueName: \"kubernetes.io/projected/84114ace-d7fd-41a3-9fa6-87df44501023-kube-api-access-9vdgt\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.060160 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-utilities\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.060563 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84114ace-d7fd-41a3-9fa6-87df44501023-catalog-content\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " 
pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.078182 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdgt\" (UniqueName: \"kubernetes.io/projected/84114ace-d7fd-41a3-9fa6-87df44501023-kube-api-access-9vdgt\") pod \"certified-operators-8m42q\" (UID: \"84114ace-d7fd-41a3-9fa6-87df44501023\") " pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.195307 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.216560 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gb64t"] Mar 19 16:49:13 crc kubenswrapper[4792]: W0319 16:49:13.220575 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6583ed_1c62_448f_98f6_6055fe84c457.slice/crio-5de131616fcc7477b0cc4544198333d427dfbcadac263101bafe0f6da0bd99ab WatchSource:0}: Error finding container 5de131616fcc7477b0cc4544198333d427dfbcadac263101bafe0f6da0bd99ab: Status 404 returned error can't find the container with id 5de131616fcc7477b0cc4544198333d427dfbcadac263101bafe0f6da0bd99ab Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.393285 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m42q"] Mar 19 16:49:13 crc kubenswrapper[4792]: W0319 16:49:13.530853 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84114ace_d7fd_41a3_9fa6_87df44501023.slice/crio-61eab596f2a3120e5b9afb9f8ce47dffd42b99c98e08cd4a8da51e8ccff67495 WatchSource:0}: Error finding container 61eab596f2a3120e5b9afb9f8ce47dffd42b99c98e08cd4a8da51e8ccff67495: Status 404 returned error can't find the container 
with id 61eab596f2a3120e5b9afb9f8ce47dffd42b99c98e08cd4a8da51e8ccff67495 Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.993364 4792 generic.go:334] "Generic (PLEG): container finished" podID="84114ace-d7fd-41a3-9fa6-87df44501023" containerID="ec0be7771cfb90336befb5e46b5bdf03d367f11420a764e359a1fbdf59715ef0" exitCode=0 Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.993466 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerDied","Data":"ec0be7771cfb90336befb5e46b5bdf03d367f11420a764e359a1fbdf59715ef0"} Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.993540 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerStarted","Data":"61eab596f2a3120e5b9afb9f8ce47dffd42b99c98e08cd4a8da51e8ccff67495"} Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.995217 4792 generic.go:334] "Generic (PLEG): container finished" podID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerID="0cb0f961ef70b2d094b50592147b6d9f8e9f3f89659323b7fa939c26ba819ac7" exitCode=0 Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.995277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerDied","Data":"0cb0f961ef70b2d094b50592147b6d9f8e9f3f89659323b7fa939c26ba819ac7"} Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.997579 4792 generic.go:334] "Generic (PLEG): container finished" podID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerID="0bc7af27b3928e75e9450f3ffccdd253700c0b8ca5768cecc16db755c4dbdd01" exitCode=0 Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.997657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" 
event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerDied","Data":"0bc7af27b3928e75e9450f3ffccdd253700c0b8ca5768cecc16db755c4dbdd01"} Mar 19 16:49:13 crc kubenswrapper[4792]: I0319 16:49:13.997683 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerStarted","Data":"5de131616fcc7477b0cc4544198333d427dfbcadac263101bafe0f6da0bd99ab"} Mar 19 16:49:14 crc kubenswrapper[4792]: I0319 16:49:14.013373 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerStarted","Data":"867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636"} Mar 19 16:49:14 crc kubenswrapper[4792]: I0319 16:49:14.073177 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5hq59" podStartSLOduration=2.6798674350000002 podStartE2EDuration="4.073156044s" podCreationTimestamp="2026-03-19 16:49:10 +0000 UTC" firstStartedPulling="2026-03-19 16:49:11.972136413 +0000 UTC m=+515.118193953" lastFinishedPulling="2026-03-19 16:49:13.365425022 +0000 UTC m=+516.511482562" observedRunningTime="2026-03-19 16:49:14.071260222 +0000 UTC m=+517.217317762" watchObservedRunningTime="2026-03-19 16:49:14.073156044 +0000 UTC m=+517.219213584" Mar 19 16:49:15 crc kubenswrapper[4792]: I0319 16:49:15.021363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerStarted","Data":"4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c"} Mar 19 16:49:15 crc kubenswrapper[4792]: I0319 16:49:15.024444 4792 generic.go:334] "Generic (PLEG): container finished" podID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerID="db76cd1bd823cc6df72b5e8bb18d73a97307f01d90b5c6a9e9e177d2d91cf762" 
exitCode=0 Mar 19 16:49:15 crc kubenswrapper[4792]: I0319 16:49:15.024507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerDied","Data":"db76cd1bd823cc6df72b5e8bb18d73a97307f01d90b5c6a9e9e177d2d91cf762"} Mar 19 16:49:15 crc kubenswrapper[4792]: I0319 16:49:15.027471 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerStarted","Data":"dcec4ddaa4bbc12d6651845642ff95e37e0a1bca750687ea872826045e8d1d3a"} Mar 19 16:49:15 crc kubenswrapper[4792]: I0319 16:49:15.044148 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h7gpk" podStartSLOduration=2.505798844 podStartE2EDuration="5.044131883s" podCreationTimestamp="2026-03-19 16:49:10 +0000 UTC" firstStartedPulling="2026-03-19 16:49:11.968365748 +0000 UTC m=+515.114423298" lastFinishedPulling="2026-03-19 16:49:14.506698797 +0000 UTC m=+517.652756337" observedRunningTime="2026-03-19 16:49:15.04186625 +0000 UTC m=+518.187923780" watchObservedRunningTime="2026-03-19 16:49:15.044131883 +0000 UTC m=+518.190189433" Mar 19 16:49:16 crc kubenswrapper[4792]: I0319 16:49:16.036636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerStarted","Data":"a697f1c3f685b693a9c48ace845375afce8c47b387ddc693f7405cf593a8311c"} Mar 19 16:49:16 crc kubenswrapper[4792]: I0319 16:49:16.038920 4792 generic.go:334] "Generic (PLEG): container finished" podID="84114ace-d7fd-41a3-9fa6-87df44501023" containerID="dcec4ddaa4bbc12d6651845642ff95e37e0a1bca750687ea872826045e8d1d3a" exitCode=0 Mar 19 16:49:16 crc kubenswrapper[4792]: I0319 16:49:16.038990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerDied","Data":"dcec4ddaa4bbc12d6651845642ff95e37e0a1bca750687ea872826045e8d1d3a"} Mar 19 16:49:16 crc kubenswrapper[4792]: I0319 16:49:16.066166 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gb64t" podStartSLOduration=2.658786989 podStartE2EDuration="4.066148599s" podCreationTimestamp="2026-03-19 16:49:12 +0000 UTC" firstStartedPulling="2026-03-19 16:49:14.001164846 +0000 UTC m=+517.147222466" lastFinishedPulling="2026-03-19 16:49:15.408526526 +0000 UTC m=+518.554584076" observedRunningTime="2026-03-19 16:49:16.060747328 +0000 UTC m=+519.206804888" watchObservedRunningTime="2026-03-19 16:49:16.066148599 +0000 UTC m=+519.212206149" Mar 19 16:49:17 crc kubenswrapper[4792]: I0319 16:49:17.046067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerStarted","Data":"47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311"} Mar 19 16:49:17 crc kubenswrapper[4792]: I0319 16:49:17.070450 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8m42q" podStartSLOduration=2.64784916 podStartE2EDuration="5.070433038s" podCreationTimestamp="2026-03-19 16:49:12 +0000 UTC" firstStartedPulling="2026-03-19 16:49:13.996336632 +0000 UTC m=+517.142394182" lastFinishedPulling="2026-03-19 16:49:16.41892052 +0000 UTC m=+519.564978060" observedRunningTime="2026-03-19 16:49:17.068458353 +0000 UTC m=+520.214515903" watchObservedRunningTime="2026-03-19 16:49:17.070433038 +0000 UTC m=+520.216490588" Mar 19 16:49:20 crc kubenswrapper[4792]: I0319 16:49:20.577037 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:20 crc 
kubenswrapper[4792]: I0319 16:49:20.577412 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:20 crc kubenswrapper[4792]: I0319 16:49:20.629497 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:20 crc kubenswrapper[4792]: I0319 16:49:20.783025 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:20 crc kubenswrapper[4792]: I0319 16:49:20.783075 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:21 crc kubenswrapper[4792]: I0319 16:49:21.126206 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 16:49:21 crc kubenswrapper[4792]: I0319 16:49:21.821305 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 16:49:21 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 16:49:21 crc kubenswrapper[4792]: > Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.017133 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.018095 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.061443 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.131619 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gb64t" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.197175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.197242 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:23 crc kubenswrapper[4792]: I0319 16:49:23.235425 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:24 crc kubenswrapper[4792]: I0319 16:49:24.128034 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 16:49:28 crc kubenswrapper[4792]: I0319 16:49:28.548336 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" podUID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" containerName="registry" containerID="cri-o://f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7" gracePeriod=30 Mar 19 16:49:28 crc kubenswrapper[4792]: I0319 16:49:28.921424 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080131 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080318 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080340 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmdx4\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080391 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080451 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080499 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token\") pod \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\" (UID: \"d8a18336-1f12-45bf-a9e0-0c3106a4abe1\") " Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.080974 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.081123 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.086040 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.086104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.088152 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4" (OuterVolumeSpecName: "kube-api-access-fmdx4") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "kube-api-access-fmdx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.089013 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.092038 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.097292 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d8a18336-1f12-45bf-a9e0-0c3106a4abe1" (UID: "d8a18336-1f12-45bf-a9e0-0c3106a4abe1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.118198 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" containerID="f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7" exitCode=0 Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.118249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" event={"ID":"d8a18336-1f12-45bf-a9e0-0c3106a4abe1","Type":"ContainerDied","Data":"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7"} Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.118280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" event={"ID":"d8a18336-1f12-45bf-a9e0-0c3106a4abe1","Type":"ContainerDied","Data":"c3315b7cadc34c174bc5f3b94ecd97439de91f694bbeb4462fd4d639dba172f2"} Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.118301 4792 scope.go:117] 
"RemoveContainer" containerID="f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.118476 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fthfn" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.143668 4792 scope.go:117] "RemoveContainer" containerID="f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7" Mar 19 16:49:29 crc kubenswrapper[4792]: E0319 16:49:29.144195 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7\": container with ID starting with f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7 not found: ID does not exist" containerID="f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.144251 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7"} err="failed to get container status \"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7\": rpc error: code = NotFound desc = could not find container \"f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7\": container with ID starting with f7ba67aba1163a42f273763c394f4e34b74e6b491ba4d10eacf2629aa6ce2ce7 not found: ID does not exist" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.158079 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.166117 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fthfn"] Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181413 4792 reconciler_common.go:293] 
"Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181442 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmdx4\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-kube-api-access-fmdx4\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181454 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181466 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181479 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181488 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.181498 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a18336-1f12-45bf-a9e0-0c3106a4abe1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 19 16:49:29 crc kubenswrapper[4792]: I0319 16:49:29.747434 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" path="/var/lib/kubelet/pods/d8a18336-1f12-45bf-a9e0-0c3106a4abe1/volumes" Mar 19 16:49:30 crc kubenswrapper[4792]: I0319 16:49:30.821367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:49:30 crc kubenswrapper[4792]: I0319 16:49:30.863917 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.141686 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565650-xpvhz"] Mar 19 16:50:00 crc kubenswrapper[4792]: E0319 16:50:00.142553 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" containerName="registry" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.142574 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" containerName="registry" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.142737 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a18336-1f12-45bf-a9e0-0c3106a4abe1" containerName="registry" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.143516 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.147908 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.147997 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.148514 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.152057 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565650-xpvhz"] Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.292441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprt8\" (UniqueName: \"kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8\") pod \"auto-csr-approver-29565650-xpvhz\" (UID: \"0372387e-f9c8-4045-8bca-c878cba6b38b\") " pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.394369 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprt8\" (UniqueName: \"kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8\") pod \"auto-csr-approver-29565650-xpvhz\" (UID: \"0372387e-f9c8-4045-8bca-c878cba6b38b\") " pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.427816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprt8\" (UniqueName: \"kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8\") pod \"auto-csr-approver-29565650-xpvhz\" (UID: \"0372387e-f9c8-4045-8bca-c878cba6b38b\") " 
pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.478655 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.714128 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565650-xpvhz"] Mar 19 16:50:00 crc kubenswrapper[4792]: I0319 16:50:00.727941 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 16:50:01 crc kubenswrapper[4792]: I0319 16:50:01.294180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" event={"ID":"0372387e-f9c8-4045-8bca-c878cba6b38b","Type":"ContainerStarted","Data":"a26da6ea58539b58a95b22037a23e6bc76c3262af695a564e1d6988a5ef918e3"} Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.145956 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4"] Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.146957 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.149554 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.149663 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.149756 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.149878 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.149984 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.165267 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4"] Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.303497 4792 generic.go:334] "Generic (PLEG): container finished" podID="0372387e-f9c8-4045-8bca-c878cba6b38b" containerID="4bc1ba345466e133470c2f62705e1feb549dc29da0278aa9bbc39d2ef7978c03" exitCode=0 Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.303537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" event={"ID":"0372387e-f9c8-4045-8bca-c878cba6b38b","Type":"ContainerDied","Data":"4bc1ba345466e133470c2f62705e1feb549dc29da0278aa9bbc39d2ef7978c03"} Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.316728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf9j\" (UniqueName: 
\"kubernetes.io/projected/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-kube-api-access-hkf9j\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.316817 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.316889 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.418073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.418125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: 
\"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.418228 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf9j\" (UniqueName: \"kubernetes.io/projected/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-kube-api-access-hkf9j\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.420619 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.432403 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.445758 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf9j\" (UniqueName: \"kubernetes.io/projected/8bdc8d22-e57c-49c5-95a8-b01e20161e3b-kube-api-access-hkf9j\") pod \"cluster-monitoring-operator-6d5b84845-cjzh4\" (UID: \"8bdc8d22-e57c-49c5-95a8-b01e20161e3b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:02 crc kubenswrapper[4792]: I0319 16:50:02.463479 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:02.892251 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4"] Mar 19 16:50:03 crc kubenswrapper[4792]: W0319 16:50:02.906986 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bdc8d22_e57c_49c5_95a8_b01e20161e3b.slice/crio-d9060bf1d0163eb4941a45150c960a1678d7e15a48bff2d41e8f9311c3a60c01 WatchSource:0}: Error finding container d9060bf1d0163eb4941a45150c960a1678d7e15a48bff2d41e8f9311c3a60c01: Status 404 returned error can't find the container with id d9060bf1d0163eb4941a45150c960a1678d7e15a48bff2d41e8f9311c3a60c01 Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:03.310126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" event={"ID":"8bdc8d22-e57c-49c5-95a8-b01e20161e3b","Type":"ContainerStarted","Data":"d9060bf1d0163eb4941a45150c960a1678d7e15a48bff2d41e8f9311c3a60c01"} Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:03.498249 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:03.632902 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprt8\" (UniqueName: \"kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8\") pod \"0372387e-f9c8-4045-8bca-c878cba6b38b\" (UID: \"0372387e-f9c8-4045-8bca-c878cba6b38b\") " Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:03.638252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8" (OuterVolumeSpecName: "kube-api-access-fprt8") pod "0372387e-f9c8-4045-8bca-c878cba6b38b" (UID: "0372387e-f9c8-4045-8bca-c878cba6b38b"). InnerVolumeSpecName "kube-api-access-fprt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:50:03 crc kubenswrapper[4792]: I0319 16:50:03.734529 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fprt8\" (UniqueName: \"kubernetes.io/projected/0372387e-f9c8-4045-8bca-c878cba6b38b-kube-api-access-fprt8\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:04 crc kubenswrapper[4792]: I0319 16:50:04.316985 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" event={"ID":"0372387e-f9c8-4045-8bca-c878cba6b38b","Type":"ContainerDied","Data":"a26da6ea58539b58a95b22037a23e6bc76c3262af695a564e1d6988a5ef918e3"} Mar 19 16:50:04 crc kubenswrapper[4792]: I0319 16:50:04.317302 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26da6ea58539b58a95b22037a23e6bc76c3262af695a564e1d6988a5ef918e3" Mar 19 16:50:04 crc kubenswrapper[4792]: I0319 16:50:04.317366 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565650-xpvhz" Mar 19 16:50:04 crc kubenswrapper[4792]: I0319 16:50:04.563479 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565644-gg5p9"] Mar 19 16:50:04 crc kubenswrapper[4792]: I0319 16:50:04.567914 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565644-gg5p9"] Mar 19 16:50:05 crc kubenswrapper[4792]: I0319 16:50:05.746473 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422112a2-a6c2-4d09-aaeb-e4f9924ed96e" path="/var/lib/kubelet/pods/422112a2-a6c2-4d09-aaeb-e4f9924ed96e/volumes" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.114652 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"] Mar 19 16:50:06 crc kubenswrapper[4792]: E0319 16:50:06.115032 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0372387e-f9c8-4045-8bca-c878cba6b38b" containerName="oc" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.115099 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0372387e-f9c8-4045-8bca-c878cba6b38b" containerName="oc" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.115296 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0372387e-f9c8-4045-8bca-c878cba6b38b" containerName="oc" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.115794 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.120526 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"] Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.123605 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-pcfcb" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.124720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.167523 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/89a3cb59-c0fe-426a-beb3-bf0d77ba0530-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2r6xw\" (UID: \"89a3cb59-c0fe-426a-beb3-bf0d77ba0530\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.268382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/89a3cb59-c0fe-426a-beb3-bf0d77ba0530-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2r6xw\" (UID: \"89a3cb59-c0fe-426a-beb3-bf0d77ba0530\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.273550 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/89a3cb59-c0fe-426a-beb3-bf0d77ba0530-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2r6xw\" (UID: \"89a3cb59-c0fe-426a-beb3-bf0d77ba0530\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.327161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" event={"ID":"8bdc8d22-e57c-49c5-95a8-b01e20161e3b","Type":"ContainerStarted","Data":"3d2d223dc6649ca0bf9f18b824a0f382295746a63a93fb4a8df5e8130e802a91"} Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.342275 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-cjzh4" podStartSLOduration=1.735287724 podStartE2EDuration="4.342253458s" podCreationTimestamp="2026-03-19 16:50:02 +0000 UTC" firstStartedPulling="2026-03-19 16:50:02.908683447 +0000 UTC m=+566.054740997" lastFinishedPulling="2026-03-19 16:50:05.515649191 +0000 UTC m=+568.661706731" observedRunningTime="2026-03-19 16:50:06.339124802 +0000 UTC m=+569.485182342" watchObservedRunningTime="2026-03-19 16:50:06.342253458 +0000 UTC m=+569.488311038" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.433235 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:06 crc kubenswrapper[4792]: I0319 16:50:06.639415 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"] Mar 19 16:50:07 crc kubenswrapper[4792]: I0319 16:50:07.334778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" event={"ID":"89a3cb59-c0fe-426a-beb3-bf0d77ba0530","Type":"ContainerStarted","Data":"75a29c1b194b12fda5890f2a8c3d2684ecd987f506783058933c3ecc18668e2e"} Mar 19 16:50:08 crc kubenswrapper[4792]: I0319 16:50:08.343035 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" event={"ID":"89a3cb59-c0fe-426a-beb3-bf0d77ba0530","Type":"ContainerStarted","Data":"c488690842c3cea3e79b5fd60ec46f8e5bfc0a752f44c113987a29df8199fe59"} Mar 19 16:50:08 crc kubenswrapper[4792]: I0319 16:50:08.343387 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:08 crc kubenswrapper[4792]: I0319 16:50:08.347304 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 16:50:08 crc kubenswrapper[4792]: I0319 16:50:08.360456 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podStartSLOduration=1.10289993 podStartE2EDuration="2.360433972s" podCreationTimestamp="2026-03-19 16:50:06 +0000 UTC" firstStartedPulling="2026-03-19 16:50:06.658517955 +0000 UTC m=+569.804575495" lastFinishedPulling="2026-03-19 16:50:07.916051997 +0000 UTC m=+571.062109537" observedRunningTime="2026-03-19 16:50:08.355178927 +0000 UTC 
m=+571.501236477" watchObservedRunningTime="2026-03-19 16:50:08.360433972 +0000 UTC m=+571.506491542" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.217376 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-n5c2k"] Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.218449 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.222157 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.222166 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8zd4m" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.222315 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.222167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.232283 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-n5c2k"] Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.404397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.404546 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-metrics-client-ca\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.404646 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqv2f\" (UniqueName: \"kubernetes.io/projected/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-kube-api-access-wqv2f\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.404733 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.506530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.506711 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-metrics-client-ca\") pod 
\"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.506782 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqv2f\" (UniqueName: \"kubernetes.io/projected/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-kube-api-access-wqv2f\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.506917 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.508569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-metrics-client-ca\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.518068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.518071 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.523226 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqv2f\" (UniqueName: \"kubernetes.io/projected/c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9-kube-api-access-wqv2f\") pod \"prometheus-operator-db54df47d-n5c2k\" (UID: \"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9\") " pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.536104 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" Mar 19 16:50:09 crc kubenswrapper[4792]: I0319 16:50:09.940363 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-n5c2k"] Mar 19 16:50:09 crc kubenswrapper[4792]: W0319 16:50:09.943438 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc46a124e_b7eb_4b66_a3f7_b72f17d0c9f9.slice/crio-b6e6d99970b2d860c89ddfb8b9bf9d592ef3fb0e94e850276492f9ed94b95a91 WatchSource:0}: Error finding container b6e6d99970b2d860c89ddfb8b9bf9d592ef3fb0e94e850276492f9ed94b95a91: Status 404 returned error can't find the container with id b6e6d99970b2d860c89ddfb8b9bf9d592ef3fb0e94e850276492f9ed94b95a91 Mar 19 16:50:10 crc kubenswrapper[4792]: I0319 16:50:10.355406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" 
event={"ID":"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9","Type":"ContainerStarted","Data":"b6e6d99970b2d860c89ddfb8b9bf9d592ef3fb0e94e850276492f9ed94b95a91"} Mar 19 16:50:13 crc kubenswrapper[4792]: I0319 16:50:13.372726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" event={"ID":"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9","Type":"ContainerStarted","Data":"7bc6515b3599a0dcd3bd170223c69641208f58883506cc942531d37bff634e80"} Mar 19 16:50:13 crc kubenswrapper[4792]: I0319 16:50:13.373429 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" event={"ID":"c46a124e-b7eb-4b66-a3f7-b72f17d0c9f9","Type":"ContainerStarted","Data":"7adf90f20e7f236e8559893826e4d4f0e4822e5253e7c390652e79ee6d65ad53"} Mar 19 16:50:13 crc kubenswrapper[4792]: I0319 16:50:13.393727 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-n5c2k" podStartSLOduration=1.933704628 podStartE2EDuration="4.393706615s" podCreationTimestamp="2026-03-19 16:50:09 +0000 UTC" firstStartedPulling="2026-03-19 16:50:09.945878461 +0000 UTC m=+573.091936011" lastFinishedPulling="2026-03-19 16:50:12.405880458 +0000 UTC m=+575.551937998" observedRunningTime="2026-03-19 16:50:13.390222179 +0000 UTC m=+576.536279709" watchObservedRunningTime="2026-03-19 16:50:13.393706615 +0000 UTC m=+576.539764155" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.565803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs"] Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.567573 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.568773 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r74bq"] Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.569762 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.573647 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.573887 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.574143 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.574369 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-fwbp9" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.574492 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.574686 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.578338 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-2jjn6" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.582294 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r74bq"] Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.587880 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.587947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6g66\" (UniqueName: \"kubernetes.io/projected/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-kube-api-access-c6g66\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.587974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/59572dd5-9b17-4cfa-bbb3-b7edc113d119-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.587992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588027 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588048 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588066 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rs2p\" (UniqueName: \"kubernetes.io/projected/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-api-access-9rs2p\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.588130 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.589946 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs"] Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.604532 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-zmnl5"] Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.605601 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.618386 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.618584 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.618738 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-rnw47" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.688983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689082 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc 
kubenswrapper[4792]: I0319 16:50:15.689144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6g66\" (UniqueName: \"kubernetes.io/projected/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-kube-api-access-c6g66\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/59572dd5-9b17-4cfa-bbb3-b7edc113d119-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689237 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-sys\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqlr\" (UniqueName: \"kubernetes.io/projected/79624649-d591-41d1-ba82-9de4d21bc0fb-kube-api-access-zsqlr\") pod \"node-exporter-zmnl5\" (UID: 
\"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689293 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-wtmp\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689396 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-textfile\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689421 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-tls\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-root\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79624649-d591-41d1-ba82-9de4d21bc0fb-metrics-client-ca\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.689480 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: 
I0319 16:50:15.689507 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rs2p\" (UniqueName: \"kubernetes.io/projected/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-api-access-9rs2p\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.691641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/59572dd5-9b17-4cfa-bbb3-b7edc113d119-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.692082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.692210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: E0319 16:50:15.692304 4792 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 19 16:50:15 crc kubenswrapper[4792]: E0319 16:50:15.692340 4792 secret.go:188] Couldn't get 
secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 19 16:50:15 crc kubenswrapper[4792]: E0319 16:50:15.692351 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls podName:59572dd5-9b17-4cfa-bbb3-b7edc113d119 nodeName:}" failed. No retries permitted until 2026-03-19 16:50:16.192337479 +0000 UTC m=+579.338395009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-hc2bs" (UID: "59572dd5-9b17-4cfa-bbb3-b7edc113d119") : secret "kube-state-metrics-tls" not found Mar 19 16:50:15 crc kubenswrapper[4792]: E0319 16:50:15.692383 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls podName:5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9 nodeName:}" failed. No retries permitted until 2026-03-19 16:50:16.19236826 +0000 UTC m=+579.338425800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-r74bq" (UID: "5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9") : secret "openshift-state-metrics-tls" not found Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.692772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59572dd5-9b17-4cfa-bbb3-b7edc113d119-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.695940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.703390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.708782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rs2p\" (UniqueName: \"kubernetes.io/projected/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-api-access-9rs2p\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: 
\"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.721940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6g66\" (UniqueName: \"kubernetes.io/projected/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-kube-api-access-c6g66\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790231 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-sys\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqlr\" (UniqueName: \"kubernetes.io/projected/79624649-d591-41d1-ba82-9de4d21bc0fb-kube-api-access-zsqlr\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-wtmp\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790333 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-textfile\") pod \"node-exporter-zmnl5\" (UID: 
\"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-tls\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-root\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-sys\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79624649-d591-41d1-ba82-9de4d21bc0fb-metrics-client-ca\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-root\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790756 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-wtmp\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.790861 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.791045 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-textfile\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.791177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79624649-d591-41d1-ba82-9de4d21bc0fb-metrics-client-ca\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.794040 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-tls\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.807766 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79624649-d591-41d1-ba82-9de4d21bc0fb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.818409 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqlr\" (UniqueName: \"kubernetes.io/projected/79624649-d591-41d1-ba82-9de4d21bc0fb-kube-api-access-zsqlr\") pod \"node-exporter-zmnl5\" (UID: \"79624649-d591-41d1-ba82-9de4d21bc0fb\") " pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: I0319 16:50:15.918509 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zmnl5" Mar 19 16:50:15 crc kubenswrapper[4792]: W0319 16:50:15.934591 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79624649_d591_41d1_ba82_9de4d21bc0fb.slice/crio-c59646edd0b5822858e3990fa15c63f2ff208cfd6bc060d9faf67dd830acb577 WatchSource:0}: Error finding container c59646edd0b5822858e3990fa15c63f2ff208cfd6bc060d9faf67dd830acb577: Status 404 returned error can't find the container with id c59646edd0b5822858e3990fa15c63f2ff208cfd6bc060d9faf67dd830acb577 Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.197273 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.197347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.201331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-r74bq\" (UID: \"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.201427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59572dd5-9b17-4cfa-bbb3-b7edc113d119-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hc2bs\" (UID: \"59572dd5-9b17-4cfa-bbb3-b7edc113d119\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.202002 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.391887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zmnl5" event={"ID":"79624649-d591-41d1-ba82-9de4d21bc0fb","Type":"ContainerStarted","Data":"c59646edd0b5822858e3990fa15c63f2ff208cfd6bc060d9faf67dd830acb577"} Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.470106 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-r74bq"] Mar 19 16:50:16 crc kubenswrapper[4792]: W0319 16:50:16.477303 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a5fe79c_a9e4_4538_a90f_423a1ddbc8d9.slice/crio-76f197bf1c3b732c98f3fcb759f25b94881d2fd69aace2f2bfcedb8b6da259f6 WatchSource:0}: Error finding container 76f197bf1c3b732c98f3fcb759f25b94881d2fd69aace2f2bfcedb8b6da259f6: Status 404 returned error can't find the container with id 76f197bf1c3b732c98f3fcb759f25b94881d2fd69aace2f2bfcedb8b6da259f6 Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.488867 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.676869 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.678987 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.681919 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.682289 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.704392 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.704601 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.704741 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-c8hwt" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.705358 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.705682 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.706579 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.709867 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.710189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805056 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805097 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-config-out\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805176 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kgd\" (UniqueName: \"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-kube-api-access-m5kgd\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805194 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-web-config\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805230 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805264 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.805316 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906831 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-config-out\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906899 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.906962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5kgd\" (UniqueName: 
\"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-kube-api-access-m5kgd\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.907004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.907026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-web-config\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.907046 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.907072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.907542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: 
\"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.908108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.908315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e6c044c-73fb-462b-a21d-127ed782c44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.913769 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9e6c044c-73fb-462b-a21d-127ed782c44f-config-out\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.913883 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.913949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.915949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.915959 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.916730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.916991 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.927646 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5kgd\" 
(UniqueName: \"kubernetes.io/projected/9e6c044c-73fb-462b-a21d-127ed782c44f-kube-api-access-m5kgd\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.931491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9e6c044c-73fb-462b-a21d-127ed782c44f-web-config\") pod \"alertmanager-main-0\" (UID: \"9e6c044c-73fb-462b-a21d-127ed782c44f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:16 crc kubenswrapper[4792]: I0319 16:50:16.946447 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs"] Mar 19 16:50:16 crc kubenswrapper[4792]: W0319 16:50:16.967296 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59572dd5_9b17_4cfa_bbb3_b7edc113d119.slice/crio-dccb84b77edd5fb5e3033ef2fa3c93d1db7d3712a4447a61f05f0488c0581797 WatchSource:0}: Error finding container dccb84b77edd5fb5e3033ef2fa3c93d1db7d3712a4447a61f05f0488c0581797: Status 404 returned error can't find the container with id dccb84b77edd5fb5e3033ef2fa3c93d1db7d3712a4447a61f05f0488c0581797 Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.064447 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.411308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" event={"ID":"59572dd5-9b17-4cfa-bbb3-b7edc113d119","Type":"ContainerStarted","Data":"dccb84b77edd5fb5e3033ef2fa3c93d1db7d3712a4447a61f05f0488c0581797"} Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.416005 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" event={"ID":"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9","Type":"ContainerStarted","Data":"e211228cf31824ee5ed3ffffc5ea9ec9754a9a0135438e8c12b3098f6054962c"} Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.416048 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" event={"ID":"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9","Type":"ContainerStarted","Data":"0086f3b03eefc4744c179ba09d3df57621b69fe24f9a3cacc0eeaf22d18c2608"} Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.416058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" event={"ID":"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9","Type":"ContainerStarted","Data":"76f197bf1c3b732c98f3fcb759f25b94881d2fd69aace2f2bfcedb8b6da259f6"} Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.523656 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 16:50:17 crc kubenswrapper[4792]: W0319 16:50:17.532294 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e6c044c_73fb_462b_a21d_127ed782c44f.slice/crio-d195248c2a42a13bf9dd83070c93492fa501809310f51a9a5a4e1f6aa28542ab WatchSource:0}: Error finding container d195248c2a42a13bf9dd83070c93492fa501809310f51a9a5a4e1f6aa28542ab: 
Status 404 returned error can't find the container with id d195248c2a42a13bf9dd83070c93492fa501809310f51a9a5a4e1f6aa28542ab Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.663065 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-87649d4fc-vf7hh"] Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.669260 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.678686 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-87649d4fc-vf7hh"] Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.681391 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.681745 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-80p5t5v75gqjb" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.682038 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-hrclt" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.682040 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.682159 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.682253 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.682345 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 19 
16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7mb\" (UniqueName: \"kubernetes.io/projected/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-kube-api-access-hq7mb\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715685 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-metrics-client-ca\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715748 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" 
(UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.715965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-grpc-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.716020 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.716062 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: 
\"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-grpc-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hq7mb\" (UniqueName: \"kubernetes.io/projected/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-kube-api-access-hq7mb\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.817968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-metrics-client-ca\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.818003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.822198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-metrics-client-ca\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.823373 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc 
kubenswrapper[4792]: I0319 16:50:17.824308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.825050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.825724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.827241 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.833527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7mb\" (UniqueName: 
\"kubernetes.io/projected/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-kube-api-access-hq7mb\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:17 crc kubenswrapper[4792]: I0319 16:50:17.841677 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e647560e-f7fe-4bb2-bf05-80a88cf1c66a-secret-grpc-tls\") pod \"thanos-querier-87649d4fc-vf7hh\" (UID: \"e647560e-f7fe-4bb2-bf05-80a88cf1c66a\") " pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:18 crc kubenswrapper[4792]: I0319 16:50:18.023326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:18 crc kubenswrapper[4792]: I0319 16:50:18.429525 4792 generic.go:334] "Generic (PLEG): container finished" podID="79624649-d591-41d1-ba82-9de4d21bc0fb" containerID="1c5e5b9d3f56ab13cf11e385abe6067b086fb1867415e9c1831ecc1ed73572aa" exitCode=0 Mar 19 16:50:18 crc kubenswrapper[4792]: I0319 16:50:18.429608 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zmnl5" event={"ID":"79624649-d591-41d1-ba82-9de4d21bc0fb","Type":"ContainerDied","Data":"1c5e5b9d3f56ab13cf11e385abe6067b086fb1867415e9c1831ecc1ed73572aa"} Mar 19 16:50:18 crc kubenswrapper[4792]: I0319 16:50:18.431118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"d195248c2a42a13bf9dd83070c93492fa501809310f51a9a5a4e1f6aa28542ab"} Mar 19 16:50:18 crc kubenswrapper[4792]: I0319 16:50:18.981810 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-87649d4fc-vf7hh"] Mar 19 16:50:19 crc kubenswrapper[4792]: W0319 16:50:19.192249 4792 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode647560e_f7fe_4bb2_bf05_80a88cf1c66a.slice/crio-955c0b83c58451a146f3ee8e1d649ebe87ad649c79736fbd00d6fe8e637c966f WatchSource:0}: Error finding container 955c0b83c58451a146f3ee8e1d649ebe87ad649c79736fbd00d6fe8e637c966f: Status 404 returned error can't find the container with id 955c0b83c58451a146f3ee8e1d649ebe87ad649c79736fbd00d6fe8e637c966f Mar 19 16:50:19 crc kubenswrapper[4792]: I0319 16:50:19.447003 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zmnl5" event={"ID":"79624649-d591-41d1-ba82-9de4d21bc0fb","Type":"ContainerStarted","Data":"d4875331f2761b4e689e5e7c5db7e8b7932eabfcd23d24e1eb21f4c36591f436"} Mar 19 16:50:19 crc kubenswrapper[4792]: I0319 16:50:19.450382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" event={"ID":"5a5fe79c-a9e4-4538-a90f-423a1ddbc8d9","Type":"ContainerStarted","Data":"f071b1e655ad612fe0fb862a6e3db0ddbf2602be3739c925038921789fe58de3"} Mar 19 16:50:19 crc kubenswrapper[4792]: I0319 16:50:19.454598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" event={"ID":"59572dd5-9b17-4cfa-bbb3-b7edc113d119","Type":"ContainerStarted","Data":"4f2be9d39b0d0f9660502126e535c225cbd93c2bb67dc6b1bb4554cef0c219e8"} Mar 19 16:50:19 crc kubenswrapper[4792]: I0319 16:50:19.456456 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"955c0b83c58451a146f3ee8e1d649ebe87ad649c79736fbd00d6fe8e637c966f"} Mar 19 16:50:19 crc kubenswrapper[4792]: I0319 16:50:19.475129 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-r74bq" podStartSLOduration=2.514398093 
podStartE2EDuration="4.475107194s" podCreationTimestamp="2026-03-19 16:50:15 +0000 UTC" firstStartedPulling="2026-03-19 16:50:16.871774501 +0000 UTC m=+580.017832041" lastFinishedPulling="2026-03-19 16:50:18.832483602 +0000 UTC m=+581.978541142" observedRunningTime="2026-03-19 16:50:19.466671572 +0000 UTC m=+582.612729112" watchObservedRunningTime="2026-03-19 16:50:19.475107194 +0000 UTC m=+582.621164734" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.436086 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f45c97787-txvtz"] Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.437091 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.448224 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f45c97787-txvtz"] Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.458937 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrf9\" (UniqueName: \"kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.458979 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.458998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.459038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.459063 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.459088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.459103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.463827 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-zmnl5" event={"ID":"79624649-d591-41d1-ba82-9de4d21bc0fb","Type":"ContainerStarted","Data":"d642b7a8def903cdb118d14f1f4656bbac933f82460370434b54094dd4019b27"} Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.470801 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" event={"ID":"59572dd5-9b17-4cfa-bbb3-b7edc113d119","Type":"ContainerStarted","Data":"831bd0e17106460eb6c5439fdc9acb1c73f3a6db738630767850b52ae2c758dc"} Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.470894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" event={"ID":"59572dd5-9b17-4cfa-bbb3-b7edc113d119","Type":"ContainerStarted","Data":"0447626bdfc0f1df4c00ddc6518981beda0526c7528babca10705adb25ec7141"} Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.481731 4792 generic.go:334] "Generic (PLEG): container finished" podID="9e6c044c-73fb-462b-a21d-127ed782c44f" containerID="3c0726c07f603e5c2a219b1ae58b8b7b0bd9e976ae58bcb4dcaf8c792f41114e" exitCode=0 Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.482032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerDied","Data":"3c0726c07f603e5c2a219b1ae58b8b7b0bd9e976ae58bcb4dcaf8c792f41114e"} Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.491125 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-zmnl5" podStartSLOduration=4.116194604 podStartE2EDuration="5.491106997s" podCreationTimestamp="2026-03-19 16:50:15 +0000 UTC" firstStartedPulling="2026-03-19 16:50:15.936682807 +0000 UTC m=+579.082740347" lastFinishedPulling="2026-03-19 16:50:17.3115952 +0000 UTC m=+580.457652740" observedRunningTime="2026-03-19 16:50:20.487492807 +0000 UTC m=+583.633550357" 
watchObservedRunningTime="2026-03-19 16:50:20.491106997 +0000 UTC m=+583.637164537" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.537131 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hc2bs" podStartSLOduration=3.6740826699999998 podStartE2EDuration="5.537112632s" podCreationTimestamp="2026-03-19 16:50:15 +0000 UTC" firstStartedPulling="2026-03-19 16:50:16.969854181 +0000 UTC m=+580.115911721" lastFinishedPulling="2026-03-19 16:50:18.832884143 +0000 UTC m=+581.978941683" observedRunningTime="2026-03-19 16:50:20.507069176 +0000 UTC m=+583.653126716" watchObservedRunningTime="2026-03-19 16:50:20.537112632 +0000 UTC m=+583.683170192" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrf9\" (UniqueName: \"kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560348 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560365 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.560421 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.561411 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.562835 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.563418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.563924 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.568338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.578319 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrf9\" (UniqueName: \"kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.588356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert\") pod \"console-7f45c97787-txvtz\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.753255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f45c97787-txvtz"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.858691 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-856df7d6cf-zntpc"]
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.859395 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.863020 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.863288 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.863441 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.863597 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zl4kc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.863697 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.865132 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3raf4b2pu359t"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866251 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-metrics-server-audit-profiles\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866320 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-client-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866348 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-audit-log\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-client-certs\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-server-tls\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.866417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrfb\" (UniqueName: \"kubernetes.io/projected/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-kube-api-access-4nrfb\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.871199 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-856df7d6cf-zntpc"]
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.967769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-audit-log\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-client-certs\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968246 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-server-tls\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrfb\" (UniqueName: \"kubernetes.io/projected/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-kube-api-access-4nrfb\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-metrics-server-audit-profiles\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968370 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-audit-log\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.968379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-client-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.969495 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.969797 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-metrics-server-audit-profiles\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.973602 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-server-tls\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.977769 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-secret-metrics-client-certs\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.983436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrfb\" (UniqueName: \"kubernetes.io/projected/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-kube-api-access-4nrfb\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:20 crc kubenswrapper[4792]: I0319 16:50:20.987153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70963d0d-d9ae-4a3c-a2c7-8e05a90cd337-client-ca-bundle\") pod \"metrics-server-856df7d6cf-zntpc\" (UID: \"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337\") " pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.191421 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f45c97787-txvtz"]
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.194114 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.368390 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5748767799-dwqlm"]
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.369175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.371104 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.371158 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.377936 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5748767799-dwqlm"]
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.473911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/33bb9632-c429-4194-91fe-698d60a4933a-monitoring-plugin-cert\") pod \"monitoring-plugin-5748767799-dwqlm\" (UID: \"33bb9632-c429-4194-91fe-698d60a4933a\") " pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.575800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/33bb9632-c429-4194-91fe-698d60a4933a-monitoring-plugin-cert\") pod \"monitoring-plugin-5748767799-dwqlm\" (UID: \"33bb9632-c429-4194-91fe-698d60a4933a\") " pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.598127 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/33bb9632-c429-4194-91fe-698d60a4933a-monitoring-plugin-cert\") pod \"monitoring-plugin-5748767799-dwqlm\" (UID: \"33bb9632-c429-4194-91fe-698d60a4933a\") " pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.688917 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.869905 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.871671 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.874264 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.875035 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.875339 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876033 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876177 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876277 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876751 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876864 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876969 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-f2qko2ske10se"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.876969 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.877050 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-xcxjj"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879411 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-config-out\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879579 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879593 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879608 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879626 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hg24\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-kube-api-access-7hg24\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879642 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-web-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879673 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.879719 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.888411 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.890122 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.899254 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980907 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.980994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hg24\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-kube-api-access-7hg24\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981128 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-web-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981160 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981188 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-config-out\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.981826 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.982865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.982911 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.983504 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.984211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986025 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986342 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986650 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.986823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.987217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.988589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.988777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/485f0802-7649-4377-99c0-22f04b2ee5bc-config-out\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.991070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/485f0802-7649-4377-99c0-22f04b2ee5bc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:21 crc kubenswrapper[4792]: I0319 16:50:21.999896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/485f0802-7649-4377-99c0-22f04b2ee5bc-web-config\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:22 crc kubenswrapper[4792]: I0319 16:50:22.006786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hg24\" (UniqueName: \"kubernetes.io/projected/485f0802-7649-4377-99c0-22f04b2ee5bc-kube-api-access-7hg24\") pod \"prometheus-k8s-0\" (UID: \"485f0802-7649-4377-99c0-22f04b2ee5bc\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 16:50:22 crc kubenswrapper[4792]: W0319 16:50:22.107577 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d44223_f7ee_43ae_98f4_933c14d2a5d2.slice/crio-0c63f3f940913ce4cc1031614f9dec257a99935acab415eab8ebc734658c3e9b WatchSource:0}: Error finding container 0c63f3f940913ce4cc1031614f9dec257a99935acab415eab8ebc734658c3e9b: Status 404 returned error can't find the container with id 0c63f3f940913ce4cc1031614f9dec257a99935acab415eab8ebc734658c3e9b
Mar 19 16:50:22 crc kubenswrapper[4792]: I0319 16:50:22.190369 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 16:50:22 crc kubenswrapper[4792]: I0319 16:50:22.494263 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f45c97787-txvtz" event={"ID":"a9d44223-f7ee-43ae-98f4-933c14d2a5d2","Type":"ContainerStarted","Data":"0c63f3f940913ce4cc1031614f9dec257a99935acab415eab8ebc734658c3e9b"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.029793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.271330 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-856df7d6cf-zntpc"] Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.274383 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5748767799-dwqlm"] Mar 19 16:50:23 crc kubenswrapper[4792]: W0319 16:50:23.285190 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70963d0d_d9ae_4a3c_a2c7_8e05a90cd337.slice/crio-d56aaac3e4030d892d2eca0a152e6fe3f6d3b2b8200c5f898b327b993a8a18d2 WatchSource:0}: Error finding container d56aaac3e4030d892d2eca0a152e6fe3f6d3b2b8200c5f898b327b993a8a18d2: Status 404 returned error can't find the container with id d56aaac3e4030d892d2eca0a152e6fe3f6d3b2b8200c5f898b327b993a8a18d2 Mar 19 16:50:23 crc kubenswrapper[4792]: W0319 16:50:23.286159 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33bb9632_c429_4194_91fe_698d60a4933a.slice/crio-eb28415acbe51a654659c08517dd477b5a7a4c24dc166f28d055c0535b41e1ec WatchSource:0}: Error finding container eb28415acbe51a654659c08517dd477b5a7a4c24dc166f28d055c0535b41e1ec: Status 404 returned error can't find the container with id eb28415acbe51a654659c08517dd477b5a7a4c24dc166f28d055c0535b41e1ec Mar 19 
16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.504431 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f45c97787-txvtz" event={"ID":"a9d44223-f7ee-43ae-98f4-933c14d2a5d2","Type":"ContainerStarted","Data":"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.506240 4792 generic.go:334] "Generic (PLEG): container finished" podID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerID="e5023d267069a7b77bc667215f07984f78109e2972f044f96738e63d2454e121" exitCode=0 Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.506288 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerDied","Data":"e5023d267069a7b77bc667215f07984f78109e2972f044f96738e63d2454e121"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.506303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"92769c9281193f3384100caecc2adb078621031c1ec8b7f78654a654bca6eb6d"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.507394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" event={"ID":"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337","Type":"ContainerStarted","Data":"d56aaac3e4030d892d2eca0a152e6fe3f6d3b2b8200c5f898b327b993a8a18d2"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.508395 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" event={"ID":"33bb9632-c429-4194-91fe-698d60a4933a","Type":"ContainerStarted","Data":"eb28415acbe51a654659c08517dd477b5a7a4c24dc166f28d055c0535b41e1ec"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.511649 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"ed35997ef3a570e8e748221bd4d64705ca170a085b33a4219bf31e91fdd2e237"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.511674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"95d719a4226e5ef29abfe13f583661307a06eda06760dcb14c6e3525e8eb075b"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.511684 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"70e0b482ec5136007a3ab7278db8788292e571ff52fb81b5a4888ce1c82e852a"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.511693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"f88a480fbb1b980dd3f934b68e20902940765582c1db4d95f9caab0ebb764d2c"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.513998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"84f524c62d4371b74fd55122b3a2315a6a95e6488ac35494ec3f156d9562e014"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.514043 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"abe658da2a10abbe1d265ae2bcd3e72f9197b401ebd690c323e435a27bfb5de8"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.514052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" 
event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"ab5bdd0a3c77f1b30ba7def1817a682da05543df932fbc591052dbab6b69d0c6"} Mar 19 16:50:23 crc kubenswrapper[4792]: I0319 16:50:23.530220 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f45c97787-txvtz" podStartSLOduration=3.530203332 podStartE2EDuration="3.530203332s" podCreationTimestamp="2026-03-19 16:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:50:23.523909719 +0000 UTC m=+586.669967269" watchObservedRunningTime="2026-03-19 16:50:23.530203332 +0000 UTC m=+586.676260872" Mar 19 16:50:24 crc kubenswrapper[4792]: I0319 16:50:24.529241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"90b4d1a52a70debc7b96648f30d87bc72a152b147c8e687e32766b4089c97b62"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.545471 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" event={"ID":"33bb9632-c429-4194-91fe-698d60a4933a","Type":"ContainerStarted","Data":"7da15d6cb18f9966cad05261a856b799ad1c056800e11e0e335171ba9e2a737b"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.546105 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.551257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9e6c044c-73fb-462b-a21d-127ed782c44f","Type":"ContainerStarted","Data":"9beb4f2ea164aae414289e55d82c21e5730248cd0bc2009b5ada80a975448a5e"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.554878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"de7d607c2dd9181bd7c057e437fd30ad0ffd28a6aeff334fcd9ea3aae88f5bd0"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.554925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"bc2fae938625feb727638b5f758de897cfc5a924e6c814907ae34aae81695a76"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.554939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" event={"ID":"e647560e-f7fe-4bb2-bf05-80a88cf1c66a","Type":"ContainerStarted","Data":"83037f85050eb81dac70c344ee0d068823061b1d1bd3dd9846a8489462f0bd3e"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.555045 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.557012 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.558588 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"f50440ac31c7256c4fcd84a94d1685ededcaba801f72a116b7cb73cd72dde0ef"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.558629 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"f9dd289aedd64ea899a3f420375546191113dd427589e209cea1b663714444b7"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.558647 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"b3edcda1fd567acc6e2221aa59166c6d93d6d1d7d781bb3f669d29efde8a18f9"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.560398 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" podStartSLOduration=2.91029515 podStartE2EDuration="6.560384689s" podCreationTimestamp="2026-03-19 16:50:21 +0000 UTC" firstStartedPulling="2026-03-19 16:50:23.289456844 +0000 UTC m=+586.435514374" lastFinishedPulling="2026-03-19 16:50:26.939546333 +0000 UTC m=+590.085603913" observedRunningTime="2026-03-19 16:50:27.557456897 +0000 UTC m=+590.703514437" watchObservedRunningTime="2026-03-19 16:50:27.560384689 +0000 UTC m=+590.706442229" Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.561515 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" event={"ID":"70963d0d-d9ae-4a3c-a2c7-8e05a90cd337","Type":"ContainerStarted","Data":"3da746b0a718a403d28e6c7db5e5b7b1399e78a3e22f02ba53d8011c32c833e5"} Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.590788 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.131180665 podStartE2EDuration="11.590770178s" podCreationTimestamp="2026-03-19 16:50:16 +0000 UTC" firstStartedPulling="2026-03-19 16:50:17.534304621 +0000 UTC m=+580.680362161" lastFinishedPulling="2026-03-19 16:50:26.993894134 +0000 UTC m=+590.139951674" observedRunningTime="2026-03-19 16:50:27.589007099 +0000 UTC m=+590.735064659" watchObservedRunningTime="2026-03-19 16:50:27.590770178 +0000 UTC m=+590.736827728" Mar 19 16:50:27 crc kubenswrapper[4792]: I0319 16:50:27.668599 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" 
podStartSLOduration=2.88762516 podStartE2EDuration="10.668581057s" podCreationTimestamp="2026-03-19 16:50:17 +0000 UTC" firstStartedPulling="2026-03-19 16:50:19.199358332 +0000 UTC m=+582.345415882" lastFinishedPulling="2026-03-19 16:50:26.980314239 +0000 UTC m=+590.126371779" observedRunningTime="2026-03-19 16:50:27.666254272 +0000 UTC m=+590.812311832" watchObservedRunningTime="2026-03-19 16:50:27.668581057 +0000 UTC m=+590.814638607" Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.035245 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.058634 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" podStartSLOduration=4.372003889 podStartE2EDuration="8.058616817s" podCreationTimestamp="2026-03-19 16:50:20 +0000 UTC" firstStartedPulling="2026-03-19 16:50:23.287160721 +0000 UTC m=+586.433218261" lastFinishedPulling="2026-03-19 16:50:26.973773649 +0000 UTC m=+590.119831189" observedRunningTime="2026-03-19 16:50:27.698114922 +0000 UTC m=+590.844172462" watchObservedRunningTime="2026-03-19 16:50:28.058616817 +0000 UTC m=+591.204674357" Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.573808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"edf573f975dca318f6bf449253c39ad6004cd3fc8a0fea6986e5bf3d5093b69e"} Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.573890 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"1204a4434fdc1d89e7d7922601d1aaf476a9a16861caf1879ed5792c3375f4ca"} Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.573910 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"485f0802-7649-4377-99c0-22f04b2ee5bc","Type":"ContainerStarted","Data":"17ffcf08ce4cfdd4f6e41283393495358f65f38efcd806ba746b5943b37a7e10"} Mar 19 16:50:28 crc kubenswrapper[4792]: I0319 16:50:28.616217 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.133524441 podStartE2EDuration="7.616192476s" podCreationTimestamp="2026-03-19 16:50:21 +0000 UTC" firstStartedPulling="2026-03-19 16:50:23.507688881 +0000 UTC m=+586.653746421" lastFinishedPulling="2026-03-19 16:50:26.990356916 +0000 UTC m=+590.136414456" observedRunningTime="2026-03-19 16:50:28.611799173 +0000 UTC m=+591.757856713" watchObservedRunningTime="2026-03-19 16:50:28.616192476 +0000 UTC m=+591.762250026" Mar 19 16:50:30 crc kubenswrapper[4792]: I0319 16:50:30.753922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:30 crc kubenswrapper[4792]: I0319 16:50:30.754192 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:30 crc kubenswrapper[4792]: I0319 16:50:30.763696 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:31 crc kubenswrapper[4792]: I0319 16:50:31.598786 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:50:31 crc kubenswrapper[4792]: I0319 16:50:31.665263 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:50:32 crc kubenswrapper[4792]: I0319 16:50:32.191560 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 16:50:39 crc kubenswrapper[4792]: I0319 16:50:39.389915 4792 
scope.go:117] "RemoveContainer" containerID="adad26fad0c9ffd603d8a730f225b94a613021e68583df3cd447d3f1170c9afe" Mar 19 16:50:41 crc kubenswrapper[4792]: I0319 16:50:41.195624 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" Mar 19 16:50:41 crc kubenswrapper[4792]: I0319 16:50:41.200442 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" Mar 19 16:50:50 crc kubenswrapper[4792]: I0319 16:50:50.231525 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:50:50 crc kubenswrapper[4792]: I0319 16:50:50.232762 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:50:56 crc kubenswrapper[4792]: I0319 16:50:56.711728 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-q29n4" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" containerID="cri-o://ca95f3548e51b4e3b3d0fba0b9feb54aa9a767208d65bd1bab93684ae25543d0" gracePeriod=15 Mar 19 16:50:56 crc kubenswrapper[4792]: I0319 16:50:56.911053 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q29n4_d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809/console/0.log" Mar 19 16:50:56 crc kubenswrapper[4792]: I0319 16:50:56.911328 4792 generic.go:334] "Generic (PLEG): container finished" podID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" 
containerID="ca95f3548e51b4e3b3d0fba0b9feb54aa9a767208d65bd1bab93684ae25543d0" exitCode=2 Mar 19 16:50:56 crc kubenswrapper[4792]: I0319 16:50:56.911357 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q29n4" event={"ID":"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809","Type":"ContainerDied","Data":"ca95f3548e51b4e3b3d0fba0b9feb54aa9a767208d65bd1bab93684ae25543d0"} Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.145666 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q29n4_d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809/console/0.log" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.145733 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182495 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp7m9\" (UniqueName: \"kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182583 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182608 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182630 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182647 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.182702 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle\") pod \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\" (UID: \"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809\") " Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.183291 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.183303 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.183681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config" (OuterVolumeSpecName: "console-config") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.184245 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.189174 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.190981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.191991 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9" (OuterVolumeSpecName: "kube-api-access-rp7m9") pod "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" (UID: "d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809"). InnerVolumeSpecName "kube-api-access-rp7m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285036 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285068 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285085 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285117 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285127 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-console-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285135 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.285143 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp7m9\" (UniqueName: \"kubernetes.io/projected/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809-kube-api-access-rp7m9\") on node \"crc\" DevicePath \"\"" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.930363 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q29n4_d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809/console/0.log" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.930642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q29n4" event={"ID":"d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809","Type":"ContainerDied","Data":"8169800fd78f53913ae640b84357f02c39481da1393cbca3a0f6ad1fe7ff20c1"} Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.930675 4792 scope.go:117] "RemoveContainer" containerID="ca95f3548e51b4e3b3d0fba0b9feb54aa9a767208d65bd1bab93684ae25543d0" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.930760 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q29n4" Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.950257 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:50:57 crc kubenswrapper[4792]: I0319 16:50:57.971404 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-q29n4"] Mar 19 16:50:59 crc kubenswrapper[4792]: I0319 16:50:59.746286 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" path="/var/lib/kubelet/pods/d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809/volumes" Mar 19 16:51:01 crc kubenswrapper[4792]: I0319 16:51:01.199954 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" Mar 19 16:51:01 crc kubenswrapper[4792]: I0319 16:51:01.203925 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" Mar 19 16:51:20 crc kubenswrapper[4792]: I0319 16:51:20.231345 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:51:20 crc kubenswrapper[4792]: I0319 16:51:20.231940 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:51:22 crc kubenswrapper[4792]: I0319 16:51:22.191696 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 16:51:22 crc 
kubenswrapper[4792]: I0319 16:51:22.226144 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 16:51:23 crc kubenswrapper[4792]: I0319 16:51:23.153640 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.803761 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"] Mar 19 16:51:40 crc kubenswrapper[4792]: E0319 16:51:40.805956 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.806076 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.806330 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7afb4cd-3e0e-4d1d-8b2e-c3f4e2fe2809" containerName="console" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.807026 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.818999 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"] Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.937667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5x2\" (UniqueName: \"kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938045 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938374 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:40 crc kubenswrapper[4792]: I0319 16:51:40.938536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.039977 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.040227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 
16:51:41.040319 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.040401 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.040478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.040605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5x2\" (UniqueName: \"kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.040722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.041398 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.041463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.041765 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.042149 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.046098 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.046683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.057721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5x2\" (UniqueName: \"kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2\") pod \"console-7c589d8dc4-d7wtg\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") " pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.123188 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:41 crc kubenswrapper[4792]: I0319 16:51:41.549629 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"] Mar 19 16:51:42 crc kubenswrapper[4792]: I0319 16:51:42.243491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c589d8dc4-d7wtg" event={"ID":"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65","Type":"ContainerStarted","Data":"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"} Mar 19 16:51:42 crc kubenswrapper[4792]: I0319 16:51:42.243822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c589d8dc4-d7wtg" event={"ID":"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65","Type":"ContainerStarted","Data":"76e8779f8e5577118fdc0d90a8a5652be07103c5a6079089d7406303db9d715c"} Mar 19 16:51:42 crc kubenswrapper[4792]: I0319 16:51:42.273954 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c589d8dc4-d7wtg" podStartSLOduration=2.273923878 podStartE2EDuration="2.273923878s" podCreationTimestamp="2026-03-19 16:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:51:42.265606699 +0000 UTC m=+665.411664279" watchObservedRunningTime="2026-03-19 16:51:42.273923878 +0000 UTC m=+665.419981478" Mar 19 16:51:50 crc kubenswrapper[4792]: I0319 16:51:50.230805 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:51:50 crc kubenswrapper[4792]: I0319 16:51:50.231227 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:51:50 crc kubenswrapper[4792]: I0319 16:51:50.231284 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:51:50 crc kubenswrapper[4792]: I0319 16:51:50.231902 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 16:51:50 crc kubenswrapper[4792]: I0319 16:51:50.231953 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb" gracePeriod=600 Mar 
19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.123913 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.124520 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.133299 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.314207 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb" exitCode=0 Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.314285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb"} Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.315573 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821"} Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.315602 4792 scope.go:117] "RemoveContainer" containerID="c44ae9d61ca8c53f504eaf0d9805dc6eed17635a96b271ff98bf7bf2821e64ef" Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.323816 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c589d8dc4-d7wtg" Mar 19 16:51:51 crc kubenswrapper[4792]: I0319 16:51:51.403345 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-7f45c97787-txvtz"] Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.135497 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565652-xrx8j"] Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.136664 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.138830 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.142149 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.142270 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.148350 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565652-xrx8j"] Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.327586 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvh4r\" (UniqueName: \"kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r\") pod \"auto-csr-approver-29565652-xrx8j\" (UID: \"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d\") " pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.428915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvh4r\" (UniqueName: \"kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r\") pod \"auto-csr-approver-29565652-xrx8j\" (UID: \"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d\") " pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:00 crc 
kubenswrapper[4792]: I0319 16:52:00.446737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvh4r\" (UniqueName: \"kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r\") pod \"auto-csr-approver-29565652-xrx8j\" (UID: \"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d\") " pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.462710 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:00 crc kubenswrapper[4792]: I0319 16:52:00.664002 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565652-xrx8j"] Mar 19 16:52:01 crc kubenswrapper[4792]: I0319 16:52:01.389832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" event={"ID":"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d","Type":"ContainerStarted","Data":"5c1c76e441091bd25a476e60cbadcac3a90e929ee5eb13be5281f69c5bc873c4"} Mar 19 16:52:02 crc kubenswrapper[4792]: I0319 16:52:02.398814 4792 generic.go:334] "Generic (PLEG): container finished" podID="a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" containerID="7fdd279a395cc0f66290c42b86acc34f221e06435747a85e7cac1e106212bc19" exitCode=0 Mar 19 16:52:02 crc kubenswrapper[4792]: I0319 16:52:02.398916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" event={"ID":"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d","Type":"ContainerDied","Data":"7fdd279a395cc0f66290c42b86acc34f221e06435747a85e7cac1e106212bc19"} Mar 19 16:52:03 crc kubenswrapper[4792]: I0319 16:52:03.699076 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:03 crc kubenswrapper[4792]: I0319 16:52:03.770956 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvh4r\" (UniqueName: \"kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r\") pod \"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d\" (UID: \"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d\") " Mar 19 16:52:03 crc kubenswrapper[4792]: I0319 16:52:03.776508 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r" (OuterVolumeSpecName: "kube-api-access-cvh4r") pod "a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" (UID: "a1eb80a9-4b3a-4977-bb3b-8649c1d7660d"). InnerVolumeSpecName "kube-api-access-cvh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:52:03 crc kubenswrapper[4792]: I0319 16:52:03.874212 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvh4r\" (UniqueName: \"kubernetes.io/projected/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d-kube-api-access-cvh4r\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:04 crc kubenswrapper[4792]: I0319 16:52:04.414636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" event={"ID":"a1eb80a9-4b3a-4977-bb3b-8649c1d7660d","Type":"ContainerDied","Data":"5c1c76e441091bd25a476e60cbadcac3a90e929ee5eb13be5281f69c5bc873c4"} Mar 19 16:52:04 crc kubenswrapper[4792]: I0319 16:52:04.414710 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1c76e441091bd25a476e60cbadcac3a90e929ee5eb13be5281f69c5bc873c4" Mar 19 16:52:04 crc kubenswrapper[4792]: I0319 16:52:04.414818 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565652-xrx8j" Mar 19 16:52:04 crc kubenswrapper[4792]: I0319 16:52:04.759106 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565646-v4pgg"] Mar 19 16:52:04 crc kubenswrapper[4792]: I0319 16:52:04.763142 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565646-v4pgg"] Mar 19 16:52:05 crc kubenswrapper[4792]: I0319 16:52:05.757357 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b93baee-7a36-4e2a-9538-9e3663a1b1ab" path="/var/lib/kubelet/pods/5b93baee-7a36-4e2a-9538-9e3663a1b1ab/volumes" Mar 19 16:52:12 crc kubenswrapper[4792]: I0319 16:52:12.421973 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 16:52:12 crc kubenswrapper[4792]: I0319 16:52:12.422511 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.498405 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7f45c97787-txvtz" podUID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" containerName="console" containerID="cri-o://b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c" gracePeriod=15 Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.911548 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7f45c97787-txvtz_a9d44223-f7ee-43ae-98f4-933c14d2a5d2/console/0.log" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.911915 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960069 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f45c97787-txvtz_a9d44223-f7ee-43ae-98f4-933c14d2a5d2/console/0.log" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960122 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" containerID="b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c" exitCode=2 Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f45c97787-txvtz" event={"ID":"a9d44223-f7ee-43ae-98f4-933c14d2a5d2","Type":"ContainerDied","Data":"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c"} Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f45c97787-txvtz" event={"ID":"a9d44223-f7ee-43ae-98f4-933c14d2a5d2","Type":"ContainerDied","Data":"0c63f3f940913ce4cc1031614f9dec257a99935acab415eab8ebc734658c3e9b"} Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960203 4792 scope.go:117] "RemoveContainer" containerID="b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.960322 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f45c97787-txvtz" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.984500 4792 scope.go:117] "RemoveContainer" containerID="b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c" Mar 19 16:52:16 crc kubenswrapper[4792]: E0319 16:52:16.985016 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c\": container with ID starting with b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c not found: ID does not exist" containerID="b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c" Mar 19 16:52:16 crc kubenswrapper[4792]: I0319 16:52:16.985049 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c"} err="failed to get container status \"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c\": rpc error: code = NotFound desc = could not find container \"b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c\": container with ID starting with b62eff72b6b4e87c973fc921c1f0fbd9ca9673a2d6d15b62cc4249f86e23545c not found: ID does not exist" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051175 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: 
\"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051316 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051391 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrf9\" (UniqueName: \"kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051438 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.051560 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert\") pod \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\" (UID: \"a9d44223-f7ee-43ae-98f4-933c14d2a5d2\") " Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.052164 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca" (OuterVolumeSpecName: "service-ca") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.052209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config" (OuterVolumeSpecName: "console-config") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.052231 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.052294 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.056269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9" (OuterVolumeSpecName: "kube-api-access-rnrf9") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). 
InnerVolumeSpecName "kube-api-access-rnrf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.056448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.056981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a9d44223-f7ee-43ae-98f4-933c14d2a5d2" (UID: "a9d44223-f7ee-43ae-98f4-933c14d2a5d2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153083 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153148 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrf9\" (UniqueName: \"kubernetes.io/projected/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-kube-api-access-rnrf9\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153173 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153191 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153208 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153225 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.153242 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d44223-f7ee-43ae-98f4-933c14d2a5d2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.293230 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f45c97787-txvtz"] Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.298478 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7f45c97787-txvtz"] Mar 19 16:52:17 crc kubenswrapper[4792]: I0319 16:52:17.748346 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" path="/var/lib/kubelet/pods/a9d44223-f7ee-43ae-98f4-933c14d2a5d2/volumes" Mar 19 16:53:39 crc kubenswrapper[4792]: I0319 16:53:39.498662 4792 scope.go:117] "RemoveContainer" containerID="b32560c21d8ad22ed1f526205d102d3286738e6cec6de4a478d825f9b5eb71c7" Mar 19 16:53:50 crc kubenswrapper[4792]: I0319 16:53:50.231603 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:53:50 crc kubenswrapper[4792]: I0319 16:53:50.232989 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.142634 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565654-769pd"] Mar 19 16:54:00 crc kubenswrapper[4792]: E0319 16:54:00.143490 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" containerName="oc" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.143508 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" containerName="oc" Mar 19 16:54:00 crc kubenswrapper[4792]: E0319 16:54:00.143531 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" containerName="console" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.143540 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" containerName="console" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.143662 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d44223-f7ee-43ae-98f4-933c14d2a5d2" containerName="console" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.143677 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" containerName="oc" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.144152 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.146396 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.188053 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.188176 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.194353 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565654-769pd"] Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.286663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7vh\" (UniqueName: \"kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh\") pod \"auto-csr-approver-29565654-769pd\" (UID: \"15038476-e48d-4bb9-b67f-928eb93e7c18\") " pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.388456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7vh\" (UniqueName: \"kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh\") pod \"auto-csr-approver-29565654-769pd\" (UID: \"15038476-e48d-4bb9-b67f-928eb93e7c18\") " pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.406933 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7vh\" (UniqueName: \"kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh\") pod \"auto-csr-approver-29565654-769pd\" (UID: \"15038476-e48d-4bb9-b67f-928eb93e7c18\") " 
pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.505378 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:00 crc kubenswrapper[4792]: I0319 16:54:00.726707 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565654-769pd"] Mar 19 16:54:01 crc kubenswrapper[4792]: I0319 16:54:01.669333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565654-769pd" event={"ID":"15038476-e48d-4bb9-b67f-928eb93e7c18","Type":"ContainerStarted","Data":"e33b2ed5fa1521faf305dc7ffec289d4980eacbe513a7ebd1c04b23be76e197e"} Mar 19 16:54:02 crc kubenswrapper[4792]: I0319 16:54:02.676546 4792 generic.go:334] "Generic (PLEG): container finished" podID="15038476-e48d-4bb9-b67f-928eb93e7c18" containerID="80b82ade50533f6bd36db175c7aa9476ab9558021e27cf69c36e04034331fb71" exitCode=0 Mar 19 16:54:02 crc kubenswrapper[4792]: I0319 16:54:02.676603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565654-769pd" event={"ID":"15038476-e48d-4bb9-b67f-928eb93e7c18","Type":"ContainerDied","Data":"80b82ade50533f6bd36db175c7aa9476ab9558021e27cf69c36e04034331fb71"} Mar 19 16:54:03 crc kubenswrapper[4792]: I0319 16:54:03.926919 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.042961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj7vh\" (UniqueName: \"kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh\") pod \"15038476-e48d-4bb9-b67f-928eb93e7c18\" (UID: \"15038476-e48d-4bb9-b67f-928eb93e7c18\") " Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.052076 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh" (OuterVolumeSpecName: "kube-api-access-qj7vh") pod "15038476-e48d-4bb9-b67f-928eb93e7c18" (UID: "15038476-e48d-4bb9-b67f-928eb93e7c18"). InnerVolumeSpecName "kube-api-access-qj7vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.144859 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj7vh\" (UniqueName: \"kubernetes.io/projected/15038476-e48d-4bb9-b67f-928eb93e7c18-kube-api-access-qj7vh\") on node \"crc\" DevicePath \"\"" Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.691957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565654-769pd" event={"ID":"15038476-e48d-4bb9-b67f-928eb93e7c18","Type":"ContainerDied","Data":"e33b2ed5fa1521faf305dc7ffec289d4980eacbe513a7ebd1c04b23be76e197e"} Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.692002 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e33b2ed5fa1521faf305dc7ffec289d4980eacbe513a7ebd1c04b23be76e197e" Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.692014 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565654-769pd" Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.976863 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565648-v84mv"] Mar 19 16:54:04 crc kubenswrapper[4792]: I0319 16:54:04.981036 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565648-v84mv"] Mar 19 16:54:05 crc kubenswrapper[4792]: I0319 16:54:05.747910 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9c6bb2-5fe1-41f6-bd92-f274417cbe62" path="/var/lib/kubelet/pods/1f9c6bb2-5fe1-41f6-bd92-f274417cbe62/volumes" Mar 19 16:54:20 crc kubenswrapper[4792]: I0319 16:54:20.231348 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:54:20 crc kubenswrapper[4792]: I0319 16:54:20.232989 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:54:39 crc kubenswrapper[4792]: I0319 16:54:39.553572 4792 scope.go:117] "RemoveContainer" containerID="d29c6d45e264553fe70a8bf0fe437bb6e533fab99caed59de79a2f29296e163b" Mar 19 16:54:40 crc kubenswrapper[4792]: I0319 16:54:40.317241 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 16:54:50 crc kubenswrapper[4792]: I0319 16:54:50.231240 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:54:50 crc kubenswrapper[4792]: I0319 16:54:50.232019 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:54:50 crc kubenswrapper[4792]: I0319 16:54:50.232109 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:54:50 crc kubenswrapper[4792]: I0319 16:54:50.232923 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 16:54:50 crc kubenswrapper[4792]: I0319 16:54:50.233034 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821" gracePeriod=600 Mar 19 16:54:51 crc kubenswrapper[4792]: I0319 16:54:51.021136 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821" exitCode=0 Mar 19 16:54:51 crc kubenswrapper[4792]: I0319 16:54:51.021178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821"} Mar 19 16:54:51 crc kubenswrapper[4792]: I0319 16:54:51.021517 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb"} Mar 19 16:54:51 crc kubenswrapper[4792]: I0319 16:54:51.021544 4792 scope.go:117] "RemoveContainer" containerID="b47f858f0b64f2da0774e4353d257362e15551e1c4b2ea1e77e5a1d5a1fb4edb" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.045536 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8"] Mar 19 16:55:37 crc kubenswrapper[4792]: E0319 16:55:37.046325 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15038476-e48d-4bb9-b67f-928eb93e7c18" containerName="oc" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.046354 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="15038476-e48d-4bb9-b67f-928eb93e7c18" containerName="oc" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.046520 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="15038476-e48d-4bb9-b67f-928eb93e7c18" containerName="oc" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.047562 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.049161 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.056165 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8"] Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.149362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.149688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.149761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkkv\" (UniqueName: \"kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: 
I0319 16:55:37.250808 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.250914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkkv\" (UniqueName: \"kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.251000 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.251517 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.251589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.268670 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkkv\" (UniqueName: \"kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.363755 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:37 crc kubenswrapper[4792]: I0319 16:55:37.585315 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8"] Mar 19 16:55:38 crc kubenswrapper[4792]: I0319 16:55:38.333132 4792 generic.go:334] "Generic (PLEG): container finished" podID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerID="2b325be51b3f03dc15e08a3f98ce50787913ca7f57a5cad75c4092ad265c05e3" exitCode=0 Mar 19 16:55:38 crc kubenswrapper[4792]: I0319 16:55:38.333183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" event={"ID":"d9eac154-f601-45a7-9d86-07e01fe01bf1","Type":"ContainerDied","Data":"2b325be51b3f03dc15e08a3f98ce50787913ca7f57a5cad75c4092ad265c05e3"} Mar 19 16:55:38 crc kubenswrapper[4792]: I0319 16:55:38.333482 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" event={"ID":"d9eac154-f601-45a7-9d86-07e01fe01bf1","Type":"ContainerStarted","Data":"da80735589bec9d7e36f937bd337771607ae058c0ff25a6ff1da46161f9df4a6"} Mar 19 16:55:38 crc kubenswrapper[4792]: I0319 16:55:38.336105 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.406650 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.407806 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.423119 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.484734 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.484822 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2b6\" (UniqueName: \"kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.484950 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.586343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.586482 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn2b6\" (UniqueName: \"kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.586532 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.586930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.586992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.604134 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2b6\" (UniqueName: \"kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6\") pod \"redhat-operators-v7lxx\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.727758 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:39 crc kubenswrapper[4792]: I0319 16:55:39.918294 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:55:39 crc kubenswrapper[4792]: W0319 16:55:39.928237 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1873bdc_0966_4413_88a8_95d1e1156839.slice/crio-d09f47c36e957c90c179fb74d7f184b587234218c00b5c104fc1a2deb80d2bcc WatchSource:0}: Error finding container d09f47c36e957c90c179fb74d7f184b587234218c00b5c104fc1a2deb80d2bcc: Status 404 returned error can't find the container with id d09f47c36e957c90c179fb74d7f184b587234218c00b5c104fc1a2deb80d2bcc Mar 19 16:55:40 crc kubenswrapper[4792]: I0319 16:55:40.346157 4792 generic.go:334] "Generic (PLEG): container finished" podID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerID="dd023813bc239ad1a24d3566cf09fc96ee36321962496ba57ea317f518e1f214" exitCode=0 Mar 19 16:55:40 crc kubenswrapper[4792]: I0319 16:55:40.346259 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" 
event={"ID":"d9eac154-f601-45a7-9d86-07e01fe01bf1","Type":"ContainerDied","Data":"dd023813bc239ad1a24d3566cf09fc96ee36321962496ba57ea317f518e1f214"} Mar 19 16:55:40 crc kubenswrapper[4792]: I0319 16:55:40.348263 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1873bdc-0966-4413-88a8-95d1e1156839" containerID="a1ecbb13beac8dc2642079549bf244fe303dd43b5f29d0714b02a2c9275e7f9a" exitCode=0 Mar 19 16:55:40 crc kubenswrapper[4792]: I0319 16:55:40.348311 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerDied","Data":"a1ecbb13beac8dc2642079549bf244fe303dd43b5f29d0714b02a2c9275e7f9a"} Mar 19 16:55:40 crc kubenswrapper[4792]: I0319 16:55:40.348340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerStarted","Data":"d09f47c36e957c90c179fb74d7f184b587234218c00b5c104fc1a2deb80d2bcc"} Mar 19 16:55:41 crc kubenswrapper[4792]: I0319 16:55:41.356638 4792 generic.go:334] "Generic (PLEG): container finished" podID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerID="67aeb449375eb719038c370f6bd13fd3fff87e6fbce0a0321b27048561206d1c" exitCode=0 Mar 19 16:55:41 crc kubenswrapper[4792]: I0319 16:55:41.356762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" event={"ID":"d9eac154-f601-45a7-9d86-07e01fe01bf1","Type":"ContainerDied","Data":"67aeb449375eb719038c370f6bd13fd3fff87e6fbce0a0321b27048561206d1c"} Mar 19 16:55:41 crc kubenswrapper[4792]: I0319 16:55:41.359649 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerStarted","Data":"f25c4663aff383702411a76c2caf6f971a14c55205a9183bada1463664155635"} Mar 19 16:55:42 crc 
kubenswrapper[4792]: I0319 16:55:42.368797 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1873bdc-0966-4413-88a8-95d1e1156839" containerID="f25c4663aff383702411a76c2caf6f971a14c55205a9183bada1463664155635" exitCode=0 Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.368940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerDied","Data":"f25c4663aff383702411a76c2caf6f971a14c55205a9183bada1463664155635"} Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.628934 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.727256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle\") pod \"d9eac154-f601-45a7-9d86-07e01fe01bf1\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.727332 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util\") pod \"d9eac154-f601-45a7-9d86-07e01fe01bf1\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.727412 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrkkv\" (UniqueName: \"kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv\") pod \"d9eac154-f601-45a7-9d86-07e01fe01bf1\" (UID: \"d9eac154-f601-45a7-9d86-07e01fe01bf1\") " Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.730574 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle" (OuterVolumeSpecName: "bundle") pod "d9eac154-f601-45a7-9d86-07e01fe01bf1" (UID: "d9eac154-f601-45a7-9d86-07e01fe01bf1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.734118 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv" (OuterVolumeSpecName: "kube-api-access-rrkkv") pod "d9eac154-f601-45a7-9d86-07e01fe01bf1" (UID: "d9eac154-f601-45a7-9d86-07e01fe01bf1"). InnerVolumeSpecName "kube-api-access-rrkkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.746582 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util" (OuterVolumeSpecName: "util") pod "d9eac154-f601-45a7-9d86-07e01fe01bf1" (UID: "d9eac154-f601-45a7-9d86-07e01fe01bf1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.829733 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.830186 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9eac154-f601-45a7-9d86-07e01fe01bf1-util\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:42 crc kubenswrapper[4792]: I0319 16:55:42.830207 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrkkv\" (UniqueName: \"kubernetes.io/projected/d9eac154-f601-45a7-9d86-07e01fe01bf1-kube-api-access-rrkkv\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:43 crc kubenswrapper[4792]: I0319 16:55:43.376134 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" event={"ID":"d9eac154-f601-45a7-9d86-07e01fe01bf1","Type":"ContainerDied","Data":"da80735589bec9d7e36f937bd337771607ae058c0ff25a6ff1da46161f9df4a6"} Mar 19 16:55:43 crc kubenswrapper[4792]: I0319 16:55:43.376986 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da80735589bec9d7e36f937bd337771607ae058c0ff25a6ff1da46161f9df4a6" Mar 19 16:55:43 crc kubenswrapper[4792]: I0319 16:55:43.376185 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8" Mar 19 16:55:43 crc kubenswrapper[4792]: I0319 16:55:43.378144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerStarted","Data":"c153532e4243b3aa6ec4fb131bfe0f6fffb0f086fe91b842e97f99acf3297491"} Mar 19 16:55:43 crc kubenswrapper[4792]: I0319 16:55:43.402892 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v7lxx" podStartSLOduration=1.944762985 podStartE2EDuration="4.40287337s" podCreationTimestamp="2026-03-19 16:55:39 +0000 UTC" firstStartedPulling="2026-03-19 16:55:40.349288291 +0000 UTC m=+903.495345831" lastFinishedPulling="2026-03-19 16:55:42.807398666 +0000 UTC m=+905.953456216" observedRunningTime="2026-03-19 16:55:43.396960488 +0000 UTC m=+906.543018028" watchObservedRunningTime="2026-03-19 16:55:43.40287337 +0000 UTC m=+906.548930910" Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.276883 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5tgtj"] Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.277721 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-controller" containerID="cri-o://5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.277964 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="northd" containerID="cri-o://9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: 
I0319 16:55:48.278079 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.278132 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-node" containerID="cri-o://872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.278165 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-acl-logging" containerID="cri-o://6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.278210 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="sbdb" containerID="cri-o://1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.278253 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="nbdb" containerID="cri-o://e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.351342 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" 
podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" containerID="cri-o://724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d" gracePeriod=30 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.431494 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/2.log" Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.433485 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/1.log" Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.433544 4792 generic.go:334] "Generic (PLEG): container finished" podID="c71152a8-67de-430c-a09b-1535ebc93a9a" containerID="b5e0d4ec4f9a1d5f231a3612390ec1ef817e343e78cf509fe505125639449d7a" exitCode=2 Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.433573 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerDied","Data":"b5e0d4ec4f9a1d5f231a3612390ec1ef817e343e78cf509fe505125639449d7a"} Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.433623 4792 scope.go:117] "RemoveContainer" containerID="7c260a2bae9655a4de6d48c00f0d3b39444c335ce412aec68e065fbf13806346" Mar 19 16:55:48 crc kubenswrapper[4792]: I0319 16:55:48.434171 4792 scope.go:117] "RemoveContainer" containerID="b5e0d4ec4f9a1d5f231a3612390ec1ef817e343e78cf509fe505125639449d7a" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.441005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vbvt5_c71152a8-67de-430c-a09b-1535ebc93a9a/kube-multus/2.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.441502 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vbvt5" 
event={"ID":"c71152a8-67de-430c-a09b-1535ebc93a9a","Type":"ContainerStarted","Data":"21d63ca7519512806ada4b0224f2a31da226470da88f783568782f2a9e11ad70"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.443386 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovnkube-controller/3.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.445257 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-acl-logging/0.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.445682 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-controller/0.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.445982 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d" exitCode=0 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446007 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a" exitCode=0 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446015 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db" exitCode=0 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446023 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352" exitCode=0 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446030 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a" exitCode=0 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446036 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41" exitCode=143 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446042 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c" exitCode=143 Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446111 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446120 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446125 4792 scope.go:117] "RemoveContainer" containerID="d3670eca80c28d33c905cf82de812b0d06ff5e8811ac23f1fb1b197847fb6edb" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446128 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.446273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c"} Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.554388 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-acl-logging/0.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.554809 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-controller/0.log" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.555152 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621142 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xht8m"] Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621377 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621388 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621400 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621406 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621416 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-node" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621422 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-node" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621432 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kubecfg-setup" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621438 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kubecfg-setup" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621444 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621450 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621459 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="sbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621465 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="sbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621474 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="pull" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621479 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="pull" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621490 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="extract" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621495 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="extract" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621502 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-acl-logging" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621508 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-acl-logging" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621515 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621521 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-ovn-metrics" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621529 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="util" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621534 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="util" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621540 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621545 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621555 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="nbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621560 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="nbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621567 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621573 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621583 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="northd" 
Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621590 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="northd" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621698 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="nbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621709 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-node" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621720 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621727 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621734 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="northd" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621746 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621752 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9eac154-f601-45a7-9d86-07e01fe01bf1" containerName="extract" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621759 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="sbdb" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621765 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-acl-logging" Mar 19 
16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621774 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovn-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621784 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="kube-rbac-proxy-ovn-metrics" Mar 19 16:55:49 crc kubenswrapper[4792]: E0319 16:55:49.621934 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.621944 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.622053 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.622070 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerName="ovnkube-controller" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.624318 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625766 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625855 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625932 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625970 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.625983 
4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626007 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626055 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626074 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626093 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units\") pod 
\"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626107 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626130 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9w4l\" (UniqueName: \"kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626150 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-netns\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 
16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-ovn\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-config\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626314 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-log-socket\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626333 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98g5\" (UniqueName: \"kubernetes.io/projected/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-kube-api-access-q98g5\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-node-log\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 
16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-systemd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626406 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-etc-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-systemd-units\") pod \"ovnkube-node-xht8m\" (UID: 
\"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-script-lib\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-netd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-bin\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626560 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-kubelet\") pod 
\"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-slash\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-env-overrides\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626640 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-var-lib-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovn-node-metrics-cert\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626705 4792 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-kubelet\") on node \"crc\" 
DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626755 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.626803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log" (OuterVolumeSpecName: "node-log") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627159 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627415 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627444 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627464 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.627672 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.633030 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.634014 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l" (OuterVolumeSpecName: "kube-api-access-n9w4l") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "kube-api-access-n9w4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727448 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727477 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727505 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727498 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash" (OuterVolumeSpecName: "host-slash") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch\") pod \"8705e1c9-d503-400f-93b0-b04ce7083d7a\" (UID: \"8705e1c9-d503-400f-93b0-b04ce7083d7a\") " Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727547 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket" (OuterVolumeSpecName: "log-socket") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727591 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727673 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-log-socket\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98g5\" (UniqueName: \"kubernetes.io/projected/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-kube-api-access-q98g5\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-node-log\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727766 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-systemd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-log-socket\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727809 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727818 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-etc-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727813 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-node-log\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-systemd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-systemd-units\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc 
kubenswrapper[4792]: I0319 16:55:49.727888 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727892 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-script-lib\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-etc-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727933 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-systemd-units\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.727934 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 
16:55:49.727953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-netd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-netd\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728051 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-bin\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-kubelet\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-kubelet\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-slash\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728200 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-cni-bin\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-env-overrides\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728256 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-var-lib-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovn-node-metrics-cert\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-netns\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728359 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-var-lib-openvswitch\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-ovn\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728427 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-slash\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-run-ovn\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-config\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728497 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-script-lib\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-host-run-netns\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728611 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-env-overrides\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728657 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728673 4792 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728684 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728695 4792 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728707 4792 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728721 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9w4l\" (UniqueName: \"kubernetes.io/projected/8705e1c9-d503-400f-93b0-b04ce7083d7a-kube-api-access-n9w4l\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc 
kubenswrapper[4792]: I0319 16:55:49.728732 4792 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728739 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8705e1c9-d503-400f-93b0-b04ce7083d7a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728748 4792 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-slash\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728756 4792 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-log-socket\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728764 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728772 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8705e1c9-d503-400f-93b0-b04ce7083d7a-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728780 4792 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728788 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728797 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728804 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728813 4792 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728821 4792 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-node-log\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.728929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovnkube-config\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.742291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-ovn-node-metrics-cert\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc 
kubenswrapper[4792]: I0319 16:55:49.756362 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8705e1c9-d503-400f-93b0-b04ce7083d7a" (UID: "8705e1c9-d503-400f-93b0-b04ce7083d7a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.761408 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98g5\" (UniqueName: \"kubernetes.io/projected/c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a-kube-api-access-q98g5\") pod \"ovnkube-node-xht8m\" (UID: \"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.832686 4792 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8705e1c9-d503-400f-93b0-b04ce7083d7a-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 19 16:55:49 crc kubenswrapper[4792]: I0319 16:55:49.956066 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:49 crc kubenswrapper[4792]: W0319 16:55:49.974983 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d928ec_dfc7_4f20_99a4_1c89a0eb0c2a.slice/crio-ba68ba903750f70fe6eef89f9d53413b9dd93afb9eec73968a509e13082337dc WatchSource:0}: Error finding container ba68ba903750f70fe6eef89f9d53413b9dd93afb9eec73968a509e13082337dc: Status 404 returned error can't find the container with id ba68ba903750f70fe6eef89f9d53413b9dd93afb9eec73968a509e13082337dc Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.456464 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-acl-logging/0.log" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457026 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5tgtj_8705e1c9-d503-400f-93b0-b04ce7083d7a/ovn-controller/0.log" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457420 4792 generic.go:334] "Generic (PLEG): container finished" podID="8705e1c9-d503-400f-93b0-b04ce7083d7a" containerID="872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d" exitCode=0 Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d"} Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457507 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457541 4792 scope.go:117] "RemoveContainer" containerID="724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.457526 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5tgtj" event={"ID":"8705e1c9-d503-400f-93b0-b04ce7083d7a","Type":"ContainerDied","Data":"fdc2e0868c660cdbd06225d555c33aac97ebe4918b407524617baa3314527acc"} Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.459266 4792 generic.go:334] "Generic (PLEG): container finished" podID="c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a" containerID="56d90b05b01d2ddeeb66c1b5cf7dcc3bbdfbd131c4b726f90ee20624d90742a6" exitCode=0 Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.459695 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerDied","Data":"56d90b05b01d2ddeeb66c1b5cf7dcc3bbdfbd131c4b726f90ee20624d90742a6"} Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.459721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"ba68ba903750f70fe6eef89f9d53413b9dd93afb9eec73968a509e13082337dc"} Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.507947 4792 scope.go:117] "RemoveContainer" containerID="1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.515733 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5tgtj"] Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.520089 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5tgtj"] Mar 19 16:55:50 
crc kubenswrapper[4792]: I0319 16:55:50.538492 4792 scope.go:117] "RemoveContainer" containerID="e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.556295 4792 scope.go:117] "RemoveContainer" containerID="9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.571377 4792 scope.go:117] "RemoveContainer" containerID="4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.587714 4792 scope.go:117] "RemoveContainer" containerID="872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.605882 4792 scope.go:117] "RemoveContainer" containerID="6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.624496 4792 scope.go:117] "RemoveContainer" containerID="5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.684478 4792 scope.go:117] "RemoveContainer" containerID="499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.728586 4792 scope.go:117] "RemoveContainer" containerID="724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.728927 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d\": container with ID starting with 724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d not found: ID does not exist" containerID="724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.728949 4792 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d"} err="failed to get container status \"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d\": rpc error: code = NotFound desc = could not find container \"724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d\": container with ID starting with 724e8892fc11ec4b44643b3aa116dcf13afdb7f3436f80168c949c44fb9a821d not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.728966 4792 scope.go:117] "RemoveContainer" containerID="1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.730978 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\": container with ID starting with 1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a not found: ID does not exist" containerID="1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.731021 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a"} err="failed to get container status \"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\": rpc error: code = NotFound desc = could not find container \"1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a\": container with ID starting with 1f64cee880e68ecc899dc0a087eb0ad5888858e2a326d62c61edcf3e3398b28a not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.731048 4792 scope.go:117] "RemoveContainer" containerID="e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.732196 4792 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\": container with ID starting with e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db not found: ID does not exist" containerID="e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.732225 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db"} err="failed to get container status \"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\": rpc error: code = NotFound desc = could not find container \"e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db\": container with ID starting with e79883da2d939b765dc140b52a363fad43079a72621395b42bdd8bd0338bb7db not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.732244 4792 scope.go:117] "RemoveContainer" containerID="9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.735965 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\": container with ID starting with 9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352 not found: ID does not exist" containerID="9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.736006 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352"} err="failed to get container status \"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\": rpc error: code = NotFound desc = could not find container 
\"9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352\": container with ID starting with 9688fc2237f4b18639d2736926969738be9f47a4a03be0623f9bb3094ff17352 not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.736031 4792 scope.go:117] "RemoveContainer" containerID="4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.754038 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\": container with ID starting with 4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a not found: ID does not exist" containerID="4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.754082 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a"} err="failed to get container status \"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\": rpc error: code = NotFound desc = could not find container \"4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a\": container with ID starting with 4a438e532a2555c6f34dae4613b22eaf6328895a42e64761536c534afbe07a8a not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.754107 4792 scope.go:117] "RemoveContainer" containerID="872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.755286 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\": container with ID starting with 872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d not found: ID does not exist" 
containerID="872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.755309 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d"} err="failed to get container status \"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\": rpc error: code = NotFound desc = could not find container \"872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d\": container with ID starting with 872261a3c99a53f4ab9d71c0135e79c0ae676529a7d51a3c714ddd4cb1d5879d not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.755324 4792 scope.go:117] "RemoveContainer" containerID="6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.755637 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\": container with ID starting with 6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41 not found: ID does not exist" containerID="6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.755681 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41"} err="failed to get container status \"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\": rpc error: code = NotFound desc = could not find container \"6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41\": container with ID starting with 6ccd95139571d84dc9634ba4e0ad847cf6871d7dbaea6fc40fe72cc37938ea41 not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.755705 4792 scope.go:117] 
"RemoveContainer" containerID="5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.759446 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\": container with ID starting with 5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c not found: ID does not exist" containerID="5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.759473 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c"} err="failed to get container status \"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\": rpc error: code = NotFound desc = could not find container \"5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c\": container with ID starting with 5e43b5642cc1ab3007d02d696f5ba29ad85fa7d71b78894fd7e61bed46a0ab0c not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.759487 4792 scope.go:117] "RemoveContainer" containerID="499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a" Mar 19 16:55:50 crc kubenswrapper[4792]: E0319 16:55:50.759935 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\": container with ID starting with 499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a not found: ID does not exist" containerID="499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.759952 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a"} err="failed to get container status \"499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\": rpc error: code = NotFound desc = could not find container \"499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a\": container with ID starting with 499cd639499c1aee58d12b9df8d51751e33cf8e6d9b6c24a08a613d3a15b1f8a not found: ID does not exist" Mar 19 16:55:50 crc kubenswrapper[4792]: I0319 16:55:50.800076 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v7lxx" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="registry-server" probeResult="failure" output=< Mar 19 16:55:50 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 16:55:50 crc kubenswrapper[4792]: > Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.472633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"fbd7be77a710790dde4ee9adb23095fc1b0d36e041ed4e8ff930590fc20aee02"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.473893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"025f46722bc6468be4a4d4c08916c06acaca01d5b5e4b8e0becc5fcb4c0388cc"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.473912 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"feb69426bf6fa9a2ba80f04f6dc0ceb7025a1ffb57ea2a03bb6250010a276f2c"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.473921 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"202e8adc35b145161625100839a792757a6cc2dc60d1bbf8344ccad7fbe21929"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.473929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"38fa36b6b78f7d1f5c5bf7c8158e666916696ca6e9993b0dd441e28d7f843cd6"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.473949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"f3c9243a009e344dec70d2713608cc0240127cfc8bef341879dc10f302ac20b8"} Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.674974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-8kls8"] Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.675897 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.677790 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-xcrp6" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.678452 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.679424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.747351 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8705e1c9-d503-400f-93b0-b04ce7083d7a" path="/var/lib/kubelet/pods/8705e1c9-d503-400f-93b0-b04ce7083d7a/volumes" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.800185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2947\" (UniqueName: \"kubernetes.io/projected/179c2f97-fb0f-424d-81fe-0d6dd21be292-kube-api-access-f2947\") pod \"obo-prometheus-operator-8ff7d675-8kls8\" (UID: \"179c2f97-fb0f-424d-81fe-0d6dd21be292\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.901256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2947\" (UniqueName: \"kubernetes.io/projected/179c2f97-fb0f-424d-81fe-0d6dd21be292-kube-api-access-f2947\") pod \"obo-prometheus-operator-8ff7d675-8kls8\" (UID: \"179c2f97-fb0f-424d-81fe-0d6dd21be292\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.938773 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2947\" (UniqueName: 
\"kubernetes.io/projected/179c2f97-fb0f-424d-81fe-0d6dd21be292-kube-api-access-f2947\") pod \"obo-prometheus-operator-8ff7d675-8kls8\" (UID: \"179c2f97-fb0f-424d-81fe-0d6dd21be292\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:51 crc kubenswrapper[4792]: I0319 16:55:51.992887 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.020512 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(240d74de4650466c420738d7aa24a2073fc7aaf968ae2a996a01121124827696): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.020591 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(240d74de4650466c420738d7aa24a2073fc7aaf968ae2a996a01121124827696): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.020627 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(240d74de4650466c420738d7aa24a2073fc7aaf968ae2a996a01121124827696): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.020670 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-8ff7d675-8kls8_openshift-operators(179c2f97-fb0f-424d-81fe-0d6dd21be292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-8ff7d675-8kls8_openshift-operators(179c2f97-fb0f-424d-81fe-0d6dd21be292)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(240d74de4650466c420738d7aa24a2073fc7aaf968ae2a996a01121124827696): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" podUID="179c2f97-fb0f-424d-81fe-0d6dd21be292" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.028989 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk"] Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.029743 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.031594 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.031832 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-8pqmx" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.065083 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz"] Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.066548 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.104331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.104383 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.104399 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.104573 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.205641 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.205697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.205713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.206313 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.210201 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.214481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.214955 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1dc54b16-28bf-4658-91c0-5f0db7405082-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz\" (UID: \"1dc54b16-28bf-4658-91c0-5f0db7405082\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.215451 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/192e0659-f9b8-4855-b360-dce9a7978f38-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk\" (UID: \"192e0659-f9b8-4855-b360-dce9a7978f38\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.322866 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-625pf"] Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.323574 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.327043 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rvmf2" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.327217 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.358144 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.379196 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.379694 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(ba388eb53ef348fb9030e12e4cfd7fd92c7a8193715628e90569df604edc554e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.379761 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(ba388eb53ef348fb9030e12e4cfd7fd92c7a8193715628e90569df604edc554e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.379803 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(ba388eb53ef348fb9030e12e4cfd7fd92c7a8193715628e90569df604edc554e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.379861 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators(192e0659-f9b8-4855-b360-dce9a7978f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators(192e0659-f9b8-4855-b360-dce9a7978f38)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(ba388eb53ef348fb9030e12e4cfd7fd92c7a8193715628e90569df604edc554e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" podUID="192e0659-f9b8-4855-b360-dce9a7978f38" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.400518 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(ae6d4bd8812e3ca1be8ed97b07a81f293df65bb58b22f251e95d9e62b3023d56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.400580 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(ae6d4bd8812e3ca1be8ed97b07a81f293df65bb58b22f251e95d9e62b3023d56): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.400601 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(ae6d4bd8812e3ca1be8ed97b07a81f293df65bb58b22f251e95d9e62b3023d56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.400646 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators(1dc54b16-28bf-4658-91c0-5f0db7405082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators(1dc54b16-28bf-4658-91c0-5f0db7405082)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(ae6d4bd8812e3ca1be8ed97b07a81f293df65bb58b22f251e95d9e62b3023d56): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" podUID="1dc54b16-28bf-4658-91c0-5f0db7405082" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.409399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjgc\" (UniqueName: \"kubernetes.io/projected/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-kube-api-access-xqjgc\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.409452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.511131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjgc\" (UniqueName: \"kubernetes.io/projected/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-kube-api-access-xqjgc\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.511186 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc 
kubenswrapper[4792]: I0319 16:55:52.518011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.526311 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjgc\" (UniqueName: \"kubernetes.io/projected/7f7fc8f3-521e-42a6-95e0-18f42faf92c4-kube-api-access-xqjgc\") pod \"observability-operator-6dd7dd855f-625pf\" (UID: \"7f7fc8f3-521e-42a6-95e0-18f42faf92c4\") " pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.638913 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.668663 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(85b9231603a98215e971aa23f10744a4f7e5aa874d280428f63fefa131869110): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.668871 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(85b9231603a98215e971aa23f10744a4f7e5aa874d280428f63fefa131869110): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.668987 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(85b9231603a98215e971aa23f10744a4f7e5aa874d280428f63fefa131869110): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:52 crc kubenswrapper[4792]: E0319 16:55:52.669109 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-6dd7dd855f-625pf_openshift-operators(7f7fc8f3-521e-42a6-95e0-18f42faf92c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-6dd7dd855f-625pf_openshift-operators(7f7fc8f3-521e-42a6-95e0-18f42faf92c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(85b9231603a98215e971aa23f10744a4f7e5aa874d280428f63fefa131869110): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.696283 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5b64d67795-hhzt7"] Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.697105 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.702457 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.702596 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-64xh5" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.713323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-apiservice-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.713375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-webhook-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.713437 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-openshift-service-ca\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.713509 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lggp\" (UniqueName: 
\"kubernetes.io/projected/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-kube-api-access-8lggp\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.815007 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-apiservice-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.815094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-webhook-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.815151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-openshift-service-ca\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.815204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lggp\" (UniqueName: \"kubernetes.io/projected/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-kube-api-access-8lggp\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.816389 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-openshift-service-ca\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.820393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-apiservice-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.822383 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-webhook-cert\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:52 crc kubenswrapper[4792]: I0319 16:55:52.846582 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lggp\" (UniqueName: \"kubernetes.io/projected/3477a59c-705b-42e9-bf3e-6ec92fecfc9e-kube-api-access-8lggp\") pod \"perses-operator-5b64d67795-hhzt7\" (UID: \"3477a59c-705b-42e9-bf3e-6ec92fecfc9e\") " pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:53 crc kubenswrapper[4792]: I0319 16:55:53.015134 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:53 crc kubenswrapper[4792]: E0319 16:55:53.047950 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(cdeee3e1f33d82969fb8d4d8517af2ad5899df5c7719d4fcbad44333bb0b881d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:53 crc kubenswrapper[4792]: E0319 16:55:53.048225 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(cdeee3e1f33d82969fb8d4d8517af2ad5899df5c7719d4fcbad44333bb0b881d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:53 crc kubenswrapper[4792]: E0319 16:55:53.048254 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(cdeee3e1f33d82969fb8d4d8517af2ad5899df5c7719d4fcbad44333bb0b881d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:53 crc kubenswrapper[4792]: E0319 16:55:53.048319 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5b64d67795-hhzt7_openshift-operators(3477a59c-705b-42e9-bf3e-6ec92fecfc9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5b64d67795-hhzt7_openshift-operators(3477a59c-705b-42e9-bf3e-6ec92fecfc9e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(cdeee3e1f33d82969fb8d4d8517af2ad5899df5c7719d4fcbad44333bb0b881d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" Mar 19 16:55:54 crc kubenswrapper[4792]: I0319 16:55:54.493308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"84bde02944754e0acb2159cb14ae23b9a7dfdeba769d25ed91861f5a971865cf"} Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.511644 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" event={"ID":"c4d928ec-dfc7-4f20-99a4-1c89a0eb0c2a","Type":"ContainerStarted","Data":"799e3218a977c60d9c57334db51e637a26747cf857ae98fb3d8023a34caa4dd2"} Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.512169 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.512184 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.550151 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.550190 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" podStartSLOduration=7.550174736 podStartE2EDuration="7.550174736s" podCreationTimestamp="2026-03-19 16:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 16:55:56.546759322 +0000 UTC m=+919.692816862" watchObservedRunningTime="2026-03-19 16:55:56.550174736 +0000 UTC m=+919.696232276" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.680633 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-8kls8"] Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.680731 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.681153 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.685760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk"] Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.685881 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.689112 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.710648 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(1a482899b1251ddd4765c122cc53a66b4f32fe78901232bd5b812b5d819e0860): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.710714 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(1a482899b1251ddd4765c122cc53a66b4f32fe78901232bd5b812b5d819e0860): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.710736 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(1a482899b1251ddd4765c122cc53a66b4f32fe78901232bd5b812b5d819e0860): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.710776 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-8ff7d675-8kls8_openshift-operators(179c2f97-fb0f-424d-81fe-0d6dd21be292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-8ff7d675-8kls8_openshift-operators(179c2f97-fb0f-424d-81fe-0d6dd21be292)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-8kls8_openshift-operators_179c2f97-fb0f-424d-81fe-0d6dd21be292_0(1a482899b1251ddd4765c122cc53a66b4f32fe78901232bd5b812b5d819e0860): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" podUID="179c2f97-fb0f-424d-81fe-0d6dd21be292" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.711886 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-625pf"] Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.712011 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.712426 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.736298 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5b64d67795-hhzt7"] Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.736437 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.736961 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.742737 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(205f175fa0fca2b4a098e8dcc6178e8fe2455a19b39a65d4dd4dfe78357c4cda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.742783 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(205f175fa0fca2b4a098e8dcc6178e8fe2455a19b39a65d4dd4dfe78357c4cda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.742800 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(205f175fa0fca2b4a098e8dcc6178e8fe2455a19b39a65d4dd4dfe78357c4cda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.742832 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators(192e0659-f9b8-4855-b360-dce9a7978f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators(192e0659-f9b8-4855-b360-dce9a7978f38)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_openshift-operators_192e0659-f9b8-4855-b360-dce9a7978f38_0(205f175fa0fca2b4a098e8dcc6178e8fe2455a19b39a65d4dd4dfe78357c4cda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" podUID="192e0659-f9b8-4855-b360-dce9a7978f38" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.779975 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(bc4a03425ca08e5fb825c1e373c29af1d23ed75ff0cd764483b17d0b96659849): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.780038 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(bc4a03425ca08e5fb825c1e373c29af1d23ed75ff0cd764483b17d0b96659849): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.780060 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(bc4a03425ca08e5fb825c1e373c29af1d23ed75ff0cd764483b17d0b96659849): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.780100 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-6dd7dd855f-625pf_openshift-operators(7f7fc8f3-521e-42a6-95e0-18f42faf92c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-6dd7dd855f-625pf_openshift-operators(7f7fc8f3-521e-42a6-95e0-18f42faf92c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-625pf_openshift-operators_7f7fc8f3-521e-42a6-95e0-18f42faf92c4_0(bc4a03425ca08e5fb825c1e373c29af1d23ed75ff0cd764483b17d0b96659849): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.795239 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(56ea4ee4b282258ccd0715cf416f61248d20b971fdfec79cd16fdcc22f57d85b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.795295 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(56ea4ee4b282258ccd0715cf416f61248d20b971fdfec79cd16fdcc22f57d85b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.795316 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(56ea4ee4b282258ccd0715cf416f61248d20b971fdfec79cd16fdcc22f57d85b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.795354 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5b64d67795-hhzt7_openshift-operators(3477a59c-705b-42e9-bf3e-6ec92fecfc9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5b64d67795-hhzt7_openshift-operators(3477a59c-705b-42e9-bf3e-6ec92fecfc9e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5b64d67795-hhzt7_openshift-operators_3477a59c-705b-42e9-bf3e-6ec92fecfc9e_0(56ea4ee4b282258ccd0715cf416f61248d20b971fdfec79cd16fdcc22f57d85b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.807767 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz"] Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.807875 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:56 crc kubenswrapper[4792]: I0319 16:55:56.808356 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.877969 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(5b3e23ca9953bfdd7c76582dc6b81d48915db6758664f7f9a0f33627cf5929ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.878035 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(5b3e23ca9953bfdd7c76582dc6b81d48915db6758664f7f9a0f33627cf5929ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.878061 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(5b3e23ca9953bfdd7c76582dc6b81d48915db6758664f7f9a0f33627cf5929ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:55:56 crc kubenswrapper[4792]: E0319 16:55:56.878106 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators(1dc54b16-28bf-4658-91c0-5f0db7405082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators(1dc54b16-28bf-4658-91c0-5f0db7405082)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_openshift-operators_1dc54b16-28bf-4658-91c0-5f0db7405082_0(5b3e23ca9953bfdd7c76582dc6b81d48915db6758664f7f9a0f33627cf5929ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" podUID="1dc54b16-28bf-4658-91c0-5f0db7405082" Mar 19 16:55:57 crc kubenswrapper[4792]: I0319 16:55:57.517697 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:57 crc kubenswrapper[4792]: I0319 16:55:57.546820 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:55:59 crc kubenswrapper[4792]: I0319 16:55:59.777764 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:55:59 crc kubenswrapper[4792]: I0319 16:55:59.821079 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.125011 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565656-7gnrl"] Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.126076 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.128438 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.128651 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.129150 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.131609 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565656-7gnrl"] Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.212580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mh8f\" (UniqueName: \"kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f\") pod \"auto-csr-approver-29565656-7gnrl\" (UID: \"6feb904a-aaa6-415b-9df2-e29655226c0b\") " pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.314346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mh8f\" (UniqueName: \"kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f\") pod \"auto-csr-approver-29565656-7gnrl\" (UID: \"6feb904a-aaa6-415b-9df2-e29655226c0b\") " pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.339043 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mh8f\" (UniqueName: \"kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f\") pod \"auto-csr-approver-29565656-7gnrl\" (UID: \"6feb904a-aaa6-415b-9df2-e29655226c0b\") " 
pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.441442 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:00 crc kubenswrapper[4792]: I0319 16:56:00.904667 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565656-7gnrl"] Mar 19 16:56:00 crc kubenswrapper[4792]: W0319 16:56:00.908295 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6feb904a_aaa6_415b_9df2_e29655226c0b.slice/crio-392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212 WatchSource:0}: Error finding container 392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212: Status 404 returned error can't find the container with id 392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212 Mar 19 16:56:01 crc kubenswrapper[4792]: I0319 16:56:01.541196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" event={"ID":"6feb904a-aaa6-415b-9df2-e29655226c0b","Type":"ContainerStarted","Data":"392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212"} Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.200598 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.201002 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v7lxx" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="registry-server" containerID="cri-o://c153532e4243b3aa6ec4fb131bfe0f6fffb0f086fe91b842e97f99acf3297491" gracePeriod=2 Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.547774 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1873bdc-0966-4413-88a8-95d1e1156839" 
containerID="c153532e4243b3aa6ec4fb131bfe0f6fffb0f086fe91b842e97f99acf3297491" exitCode=0 Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.547919 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerDied","Data":"c153532e4243b3aa6ec4fb131bfe0f6fffb0f086fe91b842e97f99acf3297491"} Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.549617 4792 generic.go:334] "Generic (PLEG): container finished" podID="6feb904a-aaa6-415b-9df2-e29655226c0b" containerID="2c97fee6086388388932e3291e13b8fc38b2d1da10887a07cd6ea03e7f06d6a0" exitCode=0 Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.549650 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" event={"ID":"6feb904a-aaa6-415b-9df2-e29655226c0b","Type":"ContainerDied","Data":"2c97fee6086388388932e3291e13b8fc38b2d1da10887a07cd6ea03e7f06d6a0"} Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.586190 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.755328 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities\") pod \"d1873bdc-0966-4413-88a8-95d1e1156839\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.755430 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn2b6\" (UniqueName: \"kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6\") pod \"d1873bdc-0966-4413-88a8-95d1e1156839\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.755461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content\") pod \"d1873bdc-0966-4413-88a8-95d1e1156839\" (UID: \"d1873bdc-0966-4413-88a8-95d1e1156839\") " Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.756652 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities" (OuterVolumeSpecName: "utilities") pod "d1873bdc-0966-4413-88a8-95d1e1156839" (UID: "d1873bdc-0966-4413-88a8-95d1e1156839"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.766311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6" (OuterVolumeSpecName: "kube-api-access-vn2b6") pod "d1873bdc-0966-4413-88a8-95d1e1156839" (UID: "d1873bdc-0966-4413-88a8-95d1e1156839"). InnerVolumeSpecName "kube-api-access-vn2b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.856786 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn2b6\" (UniqueName: \"kubernetes.io/projected/d1873bdc-0966-4413-88a8-95d1e1156839-kube-api-access-vn2b6\") on node \"crc\" DevicePath \"\"" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.856815 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.881515 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1873bdc-0966-4413-88a8-95d1e1156839" (UID: "d1873bdc-0966-4413-88a8-95d1e1156839"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:56:02 crc kubenswrapper[4792]: I0319 16:56:02.958349 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1873bdc-0966-4413-88a8-95d1e1156839-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.556684 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7lxx" event={"ID":"d1873bdc-0966-4413-88a8-95d1e1156839","Type":"ContainerDied","Data":"d09f47c36e957c90c179fb74d7f184b587234218c00b5c104fc1a2deb80d2bcc"} Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.556732 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v7lxx" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.556754 4792 scope.go:117] "RemoveContainer" containerID="c153532e4243b3aa6ec4fb131bfe0f6fffb0f086fe91b842e97f99acf3297491" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.571020 4792 scope.go:117] "RemoveContainer" containerID="f25c4663aff383702411a76c2caf6f971a14c55205a9183bada1463664155635" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.602889 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.606603 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v7lxx"] Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.617403 4792 scope.go:117] "RemoveContainer" containerID="a1ecbb13beac8dc2642079549bf244fe303dd43b5f29d0714b02a2c9275e7f9a" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.762758 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" path="/var/lib/kubelet/pods/d1873bdc-0966-4413-88a8-95d1e1156839/volumes" Mar 19 16:56:03 crc kubenswrapper[4792]: I0319 16:56:03.940660 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.073634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mh8f\" (UniqueName: \"kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f\") pod \"6feb904a-aaa6-415b-9df2-e29655226c0b\" (UID: \"6feb904a-aaa6-415b-9df2-e29655226c0b\") " Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.080022 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f" (OuterVolumeSpecName: "kube-api-access-5mh8f") pod "6feb904a-aaa6-415b-9df2-e29655226c0b" (UID: "6feb904a-aaa6-415b-9df2-e29655226c0b"). InnerVolumeSpecName "kube-api-access-5mh8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.176106 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mh8f\" (UniqueName: \"kubernetes.io/projected/6feb904a-aaa6-415b-9df2-e29655226c0b-kube-api-access-5mh8f\") on node \"crc\" DevicePath \"\"" Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.570062 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" event={"ID":"6feb904a-aaa6-415b-9df2-e29655226c0b","Type":"ContainerDied","Data":"392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212"} Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.570105 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392ebe8def41c06ac2f0a177ff5c7648ffc152008eec5978754ed5514203e212" Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.570161 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565656-7gnrl" Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.988856 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565650-xpvhz"] Mar 19 16:56:04 crc kubenswrapper[4792]: I0319 16:56:04.993432 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565650-xpvhz"] Mar 19 16:56:05 crc kubenswrapper[4792]: I0319 16:56:05.746146 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0372387e-f9c8-4045-8bca-c878cba6b38b" path="/var/lib/kubelet/pods/0372387e-f9c8-4045-8bca-c878cba6b38b/volumes" Mar 19 16:56:07 crc kubenswrapper[4792]: I0319 16:56:07.739102 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:56:07 crc kubenswrapper[4792]: I0319 16:56:07.742904 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" Mar 19 16:56:07 crc kubenswrapper[4792]: I0319 16:56:07.963875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz"] Mar 19 16:56:07 crc kubenswrapper[4792]: W0319 16:56:07.971021 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dc54b16_28bf_4658_91c0_5f0db7405082.slice/crio-fe023d592590111a924739863d19b92c757550039410344eefdcb8c7adec45a9 WatchSource:0}: Error finding container fe023d592590111a924739863d19b92c757550039410344eefdcb8c7adec45a9: Status 404 returned error can't find the container with id fe023d592590111a924739863d19b92c757550039410344eefdcb8c7adec45a9 Mar 19 16:56:08 crc kubenswrapper[4792]: I0319 16:56:08.607547 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" event={"ID":"1dc54b16-28bf-4658-91c0-5f0db7405082","Type":"ContainerStarted","Data":"fe023d592590111a924739863d19b92c757550039410344eefdcb8c7adec45a9"} Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.739568 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.739634 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.739799 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.740474 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.740483 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" Mar 19 16:56:10 crc kubenswrapper[4792]: I0319 16:56:10.740659 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.217696 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-625pf"] Mar 19 16:56:11 crc kubenswrapper[4792]: W0319 16:56:11.227977 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f7fc8f3_521e_42a6_95e0_18f42faf92c4.slice/crio-ea682aa445f10f5e71d8f03fd01e92a48bf0a8f92941533dccd08320f5480b57 WatchSource:0}: Error finding container ea682aa445f10f5e71d8f03fd01e92a48bf0a8f92941533dccd08320f5480b57: Status 404 returned error can't find the container with id ea682aa445f10f5e71d8f03fd01e92a48bf0a8f92941533dccd08320f5480b57 Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.287262 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk"] Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.334058 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-8kls8"] Mar 19 16:56:11 crc kubenswrapper[4792]: W0319 16:56:11.345150 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179c2f97_fb0f_424d_81fe_0d6dd21be292.slice/crio-4f43f34615c477aa7dc9471e789daecfd6332b43e2948887a13392131f98d1be WatchSource:0}: Error finding container 
4f43f34615c477aa7dc9471e789daecfd6332b43e2948887a13392131f98d1be: Status 404 returned error can't find the container with id 4f43f34615c477aa7dc9471e789daecfd6332b43e2948887a13392131f98d1be Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.630989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" event={"ID":"7f7fc8f3-521e-42a6-95e0-18f42faf92c4","Type":"ContainerStarted","Data":"ea682aa445f10f5e71d8f03fd01e92a48bf0a8f92941533dccd08320f5480b57"} Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.632365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" event={"ID":"192e0659-f9b8-4855-b360-dce9a7978f38","Type":"ContainerStarted","Data":"f2aa33b432481fe0e35e05c075e28cd0f7a60890cfaeba7d3c22ab1535c7e350"} Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.633440 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" event={"ID":"179c2f97-fb0f-424d-81fe-0d6dd21be292","Type":"ContainerStarted","Data":"4f43f34615c477aa7dc9471e789daecfd6332b43e2948887a13392131f98d1be"} Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.739644 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:56:11 crc kubenswrapper[4792]: I0319 16:56:11.740169 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:56:12 crc kubenswrapper[4792]: I0319 16:56:12.235915 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5b64d67795-hhzt7"] Mar 19 16:56:12 crc kubenswrapper[4792]: I0319 16:56:12.663828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" event={"ID":"3477a59c-705b-42e9-bf3e-6ec92fecfc9e","Type":"ContainerStarted","Data":"7ad94f8092361f89126505d7003465b7644f3e57a8cd35ea35319a30b1f5c2c8"} Mar 19 16:56:15 crc kubenswrapper[4792]: I0319 16:56:15.697653 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" event={"ID":"192e0659-f9b8-4855-b360-dce9a7978f38","Type":"ContainerStarted","Data":"3342d18219cbd32ef22ac96c04f4c3a3ee39640d73770bfa696ece4a347508e7"} Mar 19 16:56:15 crc kubenswrapper[4792]: I0319 16:56:15.700863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" event={"ID":"1dc54b16-28bf-4658-91c0-5f0db7405082","Type":"ContainerStarted","Data":"febb56e43775dd95c7794a47653814435508f2d9e49536da9e0ac248f0c5b5b5"} Mar 19 16:56:15 crc kubenswrapper[4792]: I0319 16:56:15.717523 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk" podStartSLOduration=19.916643709 podStartE2EDuration="23.717508289s" podCreationTimestamp="2026-03-19 16:55:52 +0000 UTC" firstStartedPulling="2026-03-19 16:56:11.303033489 +0000 UTC m=+934.449091029" lastFinishedPulling="2026-03-19 16:56:15.103898069 +0000 UTC m=+938.249955609" observedRunningTime="2026-03-19 16:56:15.714414024 +0000 UTC m=+938.860471564" watchObservedRunningTime="2026-03-19 16:56:15.717508289 +0000 UTC m=+938.863565829" Mar 19 16:56:15 crc 
kubenswrapper[4792]: I0319 16:56:15.743410 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz" podStartSLOduration=16.61589923 podStartE2EDuration="23.74339107s" podCreationTimestamp="2026-03-19 16:55:52 +0000 UTC" firstStartedPulling="2026-03-19 16:56:07.972463701 +0000 UTC m=+931.118521241" lastFinishedPulling="2026-03-19 16:56:15.099955541 +0000 UTC m=+938.246013081" observedRunningTime="2026-03-19 16:56:15.737393626 +0000 UTC m=+938.883451166" watchObservedRunningTime="2026-03-19 16:56:15.74339107 +0000 UTC m=+938.889448610" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.749907 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" event={"ID":"3477a59c-705b-42e9-bf3e-6ec92fecfc9e","Type":"ContainerStarted","Data":"020f0c5570448ed617d0b9074da8855c08706046ea4c3db2d471f2aad084b17c"} Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.750581 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.750606 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" event={"ID":"179c2f97-fb0f-424d-81fe-0d6dd21be292","Type":"ContainerStarted","Data":"9bd511ab0fb576a95a3055fe388f2174e03eb5b1b14b0340e07fb9db5e2170d3"} Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.750627 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" event={"ID":"7f7fc8f3-521e-42a6-95e0-18f42faf92c4","Type":"ContainerStarted","Data":"fd188ec05c07ed602a7a49a17e83601a9d8d17b36b4bc5f3638428c58d0da6ae"} Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.750649 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.769377 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podStartSLOduration=21.096708941 podStartE2EDuration="27.769354062s" podCreationTimestamp="2026-03-19 16:55:52 +0000 UTC" firstStartedPulling="2026-03-19 16:56:12.271137342 +0000 UTC m=+935.417194882" lastFinishedPulling="2026-03-19 16:56:18.943782453 +0000 UTC m=+942.089840003" observedRunningTime="2026-03-19 16:56:19.768264273 +0000 UTC m=+942.914321823" watchObservedRunningTime="2026-03-19 16:56:19.769354062 +0000 UTC m=+942.915411602" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.783034 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-8kls8" podStartSLOduration=21.173765469 podStartE2EDuration="28.783016678s" podCreationTimestamp="2026-03-19 16:55:51 +0000 UTC" firstStartedPulling="2026-03-19 16:56:11.352914278 +0000 UTC m=+934.498971808" lastFinishedPulling="2026-03-19 16:56:18.962165467 +0000 UTC m=+942.108223017" observedRunningTime="2026-03-19 16:56:19.781685361 +0000 UTC m=+942.927742891" watchObservedRunningTime="2026-03-19 16:56:19.783016678 +0000 UTC m=+942.929074218" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.811706 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podStartSLOduration=20.079893101 podStartE2EDuration="27.811685505s" podCreationTimestamp="2026-03-19 16:55:52 +0000 UTC" firstStartedPulling="2026-03-19 16:56:11.232595984 +0000 UTC m=+934.378653524" lastFinishedPulling="2026-03-19 16:56:18.964388388 +0000 UTC m=+942.110445928" observedRunningTime="2026-03-19 16:56:19.806463922 +0000 UTC m=+942.952521462" watchObservedRunningTime="2026-03-19 16:56:19.811685505 +0000 UTC m=+942.957743065" Mar 19 
16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.813567 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 16:56:19 crc kubenswrapper[4792]: I0319 16:56:19.985923 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xht8m" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.112719 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd"] Mar 19 16:56:26 crc kubenswrapper[4792]: E0319 16:56:26.113333 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="extract-utilities" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.113351 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="extract-utilities" Mar 19 16:56:26 crc kubenswrapper[4792]: E0319 16:56:26.113364 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="registry-server" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.113372 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="registry-server" Mar 19 16:56:26 crc kubenswrapper[4792]: E0319 16:56:26.113385 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="extract-content" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.113392 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="extract-content" Mar 19 16:56:26 crc kubenswrapper[4792]: E0319 16:56:26.113416 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6feb904a-aaa6-415b-9df2-e29655226c0b" containerName="oc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 
16:56:26.113423 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6feb904a-aaa6-415b-9df2-e29655226c0b" containerName="oc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.113557 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6feb904a-aaa6-415b-9df2-e29655226c0b" containerName="oc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.113583 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1873bdc-0966-4413-88a8-95d1e1156839" containerName="registry-server" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.114224 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.124290 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2vblw" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.124505 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.124611 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.129487 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.135897 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-brbtt"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.137170 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-brbtt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.142299 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wbr4d" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.144175 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-brbtt"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.153448 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bgdjc"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.154261 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.157216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-n2m4m" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.176465 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bgdjc"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.308924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26nz\" (UniqueName: \"kubernetes.io/projected/6982c21c-b400-4cf9-8107-b94b0166c7e1-kube-api-access-c26nz\") pod \"cert-manager-cainjector-cf98fcc89-5pbbd\" (UID: \"6982c21c-b400-4cf9-8107-b94b0166c7e1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.308999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlh4\" (UniqueName: \"kubernetes.io/projected/7df83bb5-92b7-4c33-8907-29884370b54a-kube-api-access-gqlh4\") pod \"cert-manager-858654f9db-brbtt\" (UID: \"7df83bb5-92b7-4c33-8907-29884370b54a\") " 
pod="cert-manager/cert-manager-858654f9db-brbtt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.309072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzp6b\" (UniqueName: \"kubernetes.io/projected/bf8a2335-56a0-4c34-ac01-e93578bf4cbd-kube-api-access-mzp6b\") pod \"cert-manager-webhook-687f57d79b-bgdjc\" (UID: \"bf8a2335-56a0-4c34-ac01-e93578bf4cbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.410304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzp6b\" (UniqueName: \"kubernetes.io/projected/bf8a2335-56a0-4c34-ac01-e93578bf4cbd-kube-api-access-mzp6b\") pod \"cert-manager-webhook-687f57d79b-bgdjc\" (UID: \"bf8a2335-56a0-4c34-ac01-e93578bf4cbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.410395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26nz\" (UniqueName: \"kubernetes.io/projected/6982c21c-b400-4cf9-8107-b94b0166c7e1-kube-api-access-c26nz\") pod \"cert-manager-cainjector-cf98fcc89-5pbbd\" (UID: \"6982c21c-b400-4cf9-8107-b94b0166c7e1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.410447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlh4\" (UniqueName: \"kubernetes.io/projected/7df83bb5-92b7-4c33-8907-29884370b54a-kube-api-access-gqlh4\") pod \"cert-manager-858654f9db-brbtt\" (UID: \"7df83bb5-92b7-4c33-8907-29884370b54a\") " pod="cert-manager/cert-manager-858654f9db-brbtt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.430194 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlh4\" (UniqueName: 
\"kubernetes.io/projected/7df83bb5-92b7-4c33-8907-29884370b54a-kube-api-access-gqlh4\") pod \"cert-manager-858654f9db-brbtt\" (UID: \"7df83bb5-92b7-4c33-8907-29884370b54a\") " pod="cert-manager/cert-manager-858654f9db-brbtt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.432184 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26nz\" (UniqueName: \"kubernetes.io/projected/6982c21c-b400-4cf9-8107-b94b0166c7e1-kube-api-access-c26nz\") pod \"cert-manager-cainjector-cf98fcc89-5pbbd\" (UID: \"6982c21c-b400-4cf9-8107-b94b0166c7e1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.435397 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzp6b\" (UniqueName: \"kubernetes.io/projected/bf8a2335-56a0-4c34-ac01-e93578bf4cbd-kube-api-access-mzp6b\") pod \"cert-manager-webhook-687f57d79b-bgdjc\" (UID: \"bf8a2335-56a0-4c34-ac01-e93578bf4cbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.443390 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.460331 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-brbtt" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.474478 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.778639 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-brbtt"] Mar 19 16:56:26 crc kubenswrapper[4792]: W0319 16:56:26.790121 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df83bb5_92b7_4c33_8907_29884370b54a.slice/crio-941f59dde35c0e84294257390013974f14b13cba6dc889dca0754772a8ac6024 WatchSource:0}: Error finding container 941f59dde35c0e84294257390013974f14b13cba6dc889dca0754772a8ac6024: Status 404 returned error can't find the container with id 941f59dde35c0e84294257390013974f14b13cba6dc889dca0754772a8ac6024 Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.829075 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bgdjc"] Mar 19 16:56:26 crc kubenswrapper[4792]: I0319 16:56:26.982779 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd"] Mar 19 16:56:27 crc kubenswrapper[4792]: I0319 16:56:27.807275 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" event={"ID":"6982c21c-b400-4cf9-8107-b94b0166c7e1","Type":"ContainerStarted","Data":"ada5fa176cc6aac9d1c6265ce618ebb34bad7ba9b3422e13d5619ee520329ed9"} Mar 19 16:56:27 crc kubenswrapper[4792]: I0319 16:56:27.808480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" event={"ID":"bf8a2335-56a0-4c34-ac01-e93578bf4cbd","Type":"ContainerStarted","Data":"21907f2cf0354dbf6b7cd54aacb15707ff8eebd13922f4cb7d31b65ac8a60d77"} Mar 19 16:56:27 crc kubenswrapper[4792]: I0319 16:56:27.810328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-brbtt" 
event={"ID":"7df83bb5-92b7-4c33-8907-29884370b54a","Type":"ContainerStarted","Data":"941f59dde35c0e84294257390013974f14b13cba6dc889dca0754772a8ac6024"} Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 16:56:30.831503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-brbtt" event={"ID":"7df83bb5-92b7-4c33-8907-29884370b54a","Type":"ContainerStarted","Data":"8c5dc503bb93becd1e386b253e7db175903983217c25deb32579d2a066f9dcd1"} Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 16:56:30.838453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" event={"ID":"6982c21c-b400-4cf9-8107-b94b0166c7e1","Type":"ContainerStarted","Data":"2f1fabcaa74e976f87da6bc755d076dc3a9cff8affeeb5f0b5fc49898df422fc"} Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 16:56:30.839849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" event={"ID":"bf8a2335-56a0-4c34-ac01-e93578bf4cbd","Type":"ContainerStarted","Data":"716f0a0c682956b86df34501b8ac23fec8aac85d02ebee1be5f9ac81bdbae970"} Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 16:56:30.839979 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 16:56:30.861965 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-brbtt" podStartSLOduration=1.284040946 podStartE2EDuration="4.861943785s" podCreationTimestamp="2026-03-19 16:56:26 +0000 UTC" firstStartedPulling="2026-03-19 16:56:26.791675665 +0000 UTC m=+949.937733205" lastFinishedPulling="2026-03-19 16:56:30.369578504 +0000 UTC m=+953.515636044" observedRunningTime="2026-03-19 16:56:30.859053505 +0000 UTC m=+954.005111045" watchObservedRunningTime="2026-03-19 16:56:30.861943785 +0000 UTC m=+954.008001325" Mar 19 16:56:30 crc kubenswrapper[4792]: I0319 
16:56:30.883187 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podStartSLOduration=1.308310201 podStartE2EDuration="4.883169907s" podCreationTimestamp="2026-03-19 16:56:26 +0000 UTC" firstStartedPulling="2026-03-19 16:56:26.843611561 +0000 UTC m=+949.989669101" lastFinishedPulling="2026-03-19 16:56:30.418471267 +0000 UTC m=+953.564528807" observedRunningTime="2026-03-19 16:56:30.878651723 +0000 UTC m=+954.024709263" watchObservedRunningTime="2026-03-19 16:56:30.883169907 +0000 UTC m=+954.029227437" Mar 19 16:56:33 crc kubenswrapper[4792]: I0319 16:56:33.018142 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 16:56:33 crc kubenswrapper[4792]: I0319 16:56:33.045488 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5pbbd" podStartSLOduration=3.6425288890000003 podStartE2EDuration="7.045467643s" podCreationTimestamp="2026-03-19 16:56:26 +0000 UTC" firstStartedPulling="2026-03-19 16:56:26.994591738 +0000 UTC m=+950.140649278" lastFinishedPulling="2026-03-19 16:56:30.397530492 +0000 UTC m=+953.543588032" observedRunningTime="2026-03-19 16:56:30.912088812 +0000 UTC m=+954.058146352" watchObservedRunningTime="2026-03-19 16:56:33.045467643 +0000 UTC m=+956.191525183" Mar 19 16:56:36 crc kubenswrapper[4792]: I0319 16:56:36.477576 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 16:56:39 crc kubenswrapper[4792]: I0319 16:56:39.632030 4792 scope.go:117] "RemoveContainer" containerID="4bc1ba345466e133470c2f62705e1feb549dc29da0278aa9bbc39d2ef7978c03" Mar 19 16:56:50 crc kubenswrapper[4792]: I0319 16:56:50.231169 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:56:50 crc kubenswrapper[4792]: I0319 16:56:50.231770 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.091438 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7"] Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.093042 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.096354 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.103422 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7"] Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.288345 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.288856 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.288876 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4xv\" (UniqueName: \"kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.390089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.390215 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.390247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4xv\" (UniqueName: 
\"kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.391228 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.391303 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.410324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4xv\" (UniqueName: \"kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.418325 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:56:58 crc kubenswrapper[4792]: I0319 16:56:58.610094 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7"] Mar 19 16:56:59 crc kubenswrapper[4792]: I0319 16:56:59.024463 4792 generic.go:334] "Generic (PLEG): container finished" podID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerID="cbcb5db5556401341a5b8af774a99c56e5c344db9e363fb661ac227bc47145fe" exitCode=0 Mar 19 16:56:59 crc kubenswrapper[4792]: I0319 16:56:59.024543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" event={"ID":"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c","Type":"ContainerDied","Data":"cbcb5db5556401341a5b8af774a99c56e5c344db9e363fb661ac227bc47145fe"} Mar 19 16:56:59 crc kubenswrapper[4792]: I0319 16:56:59.024811 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" event={"ID":"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c","Type":"ContainerStarted","Data":"fcb5368f52c7a9892e28ccbb01310afcbe3a917b087d4b67deb686e28721ff8d"} Mar 19 16:57:01 crc kubenswrapper[4792]: I0319 16:57:01.051308 4792 generic.go:334] "Generic (PLEG): container finished" podID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerID="854c63f5ae11ec3f32a460b0877f5a28445cd15988b30a0c5163f448a15070e7" exitCode=0 Mar 19 16:57:01 crc kubenswrapper[4792]: I0319 16:57:01.051368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" event={"ID":"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c","Type":"ContainerDied","Data":"854c63f5ae11ec3f32a460b0877f5a28445cd15988b30a0c5163f448a15070e7"} Mar 19 16:57:02 crc kubenswrapper[4792]: I0319 16:57:02.089416 4792 
generic.go:334] "Generic (PLEG): container finished" podID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerID="0b1f4623484447cddd47711bc6135ce537453144c4fa39cd779b2b985d51885f" exitCode=0 Mar 19 16:57:02 crc kubenswrapper[4792]: I0319 16:57:02.089474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" event={"ID":"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c","Type":"ContainerDied","Data":"0b1f4623484447cddd47711bc6135ce537453144c4fa39cd779b2b985d51885f"} Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.353178 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.364645 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4xv\" (UniqueName: \"kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv\") pod \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.364696 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util\") pod \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.364717 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle\") pod \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\" (UID: \"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c\") " Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.366044 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle" (OuterVolumeSpecName: "bundle") pod "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" (UID: "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.386729 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util" (OuterVolumeSpecName: "util") pod "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" (UID: "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.421129 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv" (OuterVolumeSpecName: "kube-api-access-nx4xv") pod "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" (UID: "448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c"). InnerVolumeSpecName "kube-api-access-nx4xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.466050 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4xv\" (UniqueName: \"kubernetes.io/projected/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-kube-api-access-nx4xv\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.466084 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-util\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:03 crc kubenswrapper[4792]: I0319 16:57:03.466093 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:04 crc kubenswrapper[4792]: I0319 16:57:04.105473 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" event={"ID":"448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c","Type":"ContainerDied","Data":"fcb5368f52c7a9892e28ccbb01310afcbe3a917b087d4b67deb686e28721ff8d"} Mar 19 16:57:04 crc kubenswrapper[4792]: I0319 16:57:04.105777 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb5368f52c7a9892e28ccbb01310afcbe3a917b087d4b67deb686e28721ff8d" Mar 19 16:57:04 crc kubenswrapper[4792]: I0319 16:57:04.105538 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.277267 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx"] Mar 19 16:57:05 crc kubenswrapper[4792]: E0319 16:57:05.277494 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="pull" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.277506 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="pull" Mar 19 16:57:05 crc kubenswrapper[4792]: E0319 16:57:05.277521 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="util" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.277527 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="util" Mar 19 16:57:05 crc kubenswrapper[4792]: E0319 16:57:05.277544 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="extract" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.277550 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="extract" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.277667 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c" containerName="extract" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.278525 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.280274 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.290726 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx"] Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.294498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvrq\" (UniqueName: \"kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.294667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.294822 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: 
I0319 16:57:05.396196 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.396274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvrq\" (UniqueName: \"kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.396327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.397039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.397213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.420670 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvrq\" (UniqueName: \"kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:05 crc kubenswrapper[4792]: I0319 16:57:05.591479 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:06 crc kubenswrapper[4792]: I0319 16:57:06.061882 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx"] Mar 19 16:57:06 crc kubenswrapper[4792]: W0319 16:57:06.071408 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c058218_adf4_41fb_ad6f_1ad65b5db417.slice/crio-9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091 WatchSource:0}: Error finding container 9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091: Status 404 returned error can't find the container with id 9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091 Mar 19 16:57:06 crc kubenswrapper[4792]: I0319 16:57:06.123274 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" 
event={"ID":"6c058218-adf4-41fb-ad6f-1ad65b5db417","Type":"ContainerStarted","Data":"9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091"} Mar 19 16:57:07 crc kubenswrapper[4792]: I0319 16:57:07.134294 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerID="daf30b77998db20d3e9d5a98e35f3dafadba37006bda81154e15a128aa1a03ec" exitCode=0 Mar 19 16:57:07 crc kubenswrapper[4792]: I0319 16:57:07.134398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" event={"ID":"6c058218-adf4-41fb-ad6f-1ad65b5db417","Type":"ContainerDied","Data":"daf30b77998db20d3e9d5a98e35f3dafadba37006bda81154e15a128aa1a03ec"} Mar 19 16:57:09 crc kubenswrapper[4792]: I0319 16:57:09.153747 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerID="bd6b32144eff2eeae8bb5f62175b2a81ea194eebe0ea250aa846f81e77857aa9" exitCode=0 Mar 19 16:57:09 crc kubenswrapper[4792]: I0319 16:57:09.153865 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" event={"ID":"6c058218-adf4-41fb-ad6f-1ad65b5db417","Type":"ContainerDied","Data":"bd6b32144eff2eeae8bb5f62175b2a81ea194eebe0ea250aa846f81e77857aa9"} Mar 19 16:57:10 crc kubenswrapper[4792]: I0319 16:57:10.161818 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerID="1920d824b75da8bf40238b55c7087bfa7820bafc25595822e0a089c99c6c2602" exitCode=0 Mar 19 16:57:10 crc kubenswrapper[4792]: I0319 16:57:10.161882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" event={"ID":"6c058218-adf4-41fb-ad6f-1ad65b5db417","Type":"ContainerDied","Data":"1920d824b75da8bf40238b55c7087bfa7820bafc25595822e0a089c99c6c2602"} 
Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.238717 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv"] Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.240090 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.245810 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.245896 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.246264 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.246470 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4csf4" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.247002 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.256444 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv"] Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.256666 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.408103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-apiservice-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.408152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-webhook-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.408174 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1d900a68-83bb-40f6-8841-556f80c6ac78-manager-config\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.408218 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9sl\" (UniqueName: \"kubernetes.io/projected/1d900a68-83bb-40f6-8841-556f80c6ac78-kube-api-access-4k9sl\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.408248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-loki-operator-metrics-cert\") pod 
\"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.508942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9sl\" (UniqueName: \"kubernetes.io/projected/1d900a68-83bb-40f6-8841-556f80c6ac78-kube-api-access-4k9sl\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.508996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.509051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-apiservice-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.509076 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-webhook-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.509100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1d900a68-83bb-40f6-8841-556f80c6ac78-manager-config\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.510058 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1d900a68-83bb-40f6-8841-556f80c6ac78-manager-config\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.511077 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.514683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.515093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-apiservice-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.531362 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d900a68-83bb-40f6-8841-556f80c6ac78-webhook-cert\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.536733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9sl\" (UniqueName: \"kubernetes.io/projected/1d900a68-83bb-40f6-8841-556f80c6ac78-kube-api-access-4k9sl\") pod \"loki-operator-controller-manager-795c7b44df-ssttv\" (UID: \"1d900a68-83bb-40f6-8841-556f80c6ac78\") " pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.557311 4792 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.610554 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle\") pod \"6c058218-adf4-41fb-ad6f-1ad65b5db417\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.610665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdvrq\" (UniqueName: \"kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq\") pod \"6c058218-adf4-41fb-ad6f-1ad65b5db417\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.610695 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util\") pod \"6c058218-adf4-41fb-ad6f-1ad65b5db417\" (UID: \"6c058218-adf4-41fb-ad6f-1ad65b5db417\") " Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.613162 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle" (OuterVolumeSpecName: "bundle") pod "6c058218-adf4-41fb-ad6f-1ad65b5db417" (UID: "6c058218-adf4-41fb-ad6f-1ad65b5db417"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.621975 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq" (OuterVolumeSpecName: "kube-api-access-rdvrq") pod "6c058218-adf4-41fb-ad6f-1ad65b5db417" (UID: "6c058218-adf4-41fb-ad6f-1ad65b5db417"). InnerVolumeSpecName "kube-api-access-rdvrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.647476 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util" (OuterVolumeSpecName: "util") pod "6c058218-adf4-41fb-ad6f-1ad65b5db417" (UID: "6c058218-adf4-41fb-ad6f-1ad65b5db417"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.711987 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.712044 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdvrq\" (UniqueName: \"kubernetes.io/projected/6c058218-adf4-41fb-ad6f-1ad65b5db417-kube-api-access-rdvrq\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:11 crc kubenswrapper[4792]: I0319 16:57:11.712053 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c058218-adf4-41fb-ad6f-1ad65b5db417-util\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:12 crc kubenswrapper[4792]: I0319 16:57:12.112681 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv"] Mar 19 16:57:12 crc kubenswrapper[4792]: W0319 16:57:12.118136 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d900a68_83bb_40f6_8841_556f80c6ac78.slice/crio-3d4881b22f5be7bad6ab2adbee83d287bd6bc2cb994e57ba3945746bdc70aee3 WatchSource:0}: Error finding container 3d4881b22f5be7bad6ab2adbee83d287bd6bc2cb994e57ba3945746bdc70aee3: Status 404 returned error can't find the container with id 3d4881b22f5be7bad6ab2adbee83d287bd6bc2cb994e57ba3945746bdc70aee3 Mar 19 
16:57:12 crc kubenswrapper[4792]: I0319 16:57:12.179044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" event={"ID":"6c058218-adf4-41fb-ad6f-1ad65b5db417","Type":"ContainerDied","Data":"9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091"} Mar 19 16:57:12 crc kubenswrapper[4792]: I0319 16:57:12.179067 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx" Mar 19 16:57:12 crc kubenswrapper[4792]: I0319 16:57:12.179081 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6a6532c025f59ba60419bbc5cfc45bcfcfc1de378e5a92f07bc379b2ab2091" Mar 19 16:57:12 crc kubenswrapper[4792]: I0319 16:57:12.180420 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" event={"ID":"1d900a68-83bb-40f6-8841-556f80c6ac78","Type":"ContainerStarted","Data":"3d4881b22f5be7bad6ab2adbee83d287bd6bc2cb994e57ba3945746bdc70aee3"} Mar 19 16:57:18 crc kubenswrapper[4792]: I0319 16:57:18.213361 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" event={"ID":"1d900a68-83bb-40f6-8841-556f80c6ac78","Type":"ContainerStarted","Data":"b0a25234f51806afd23a80c6002c1cc3e9959bc22eb1ebb831ab139a9088513b"} Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.367344 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh"] Mar 19 16:57:19 crc kubenswrapper[4792]: E0319 16:57:19.367935 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="extract" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.367948 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="extract" Mar 19 16:57:19 crc kubenswrapper[4792]: E0319 16:57:19.367969 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="util" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.367975 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="util" Mar 19 16:57:19 crc kubenswrapper[4792]: E0319 16:57:19.367988 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="pull" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.367993 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="pull" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.368093 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c058218-adf4-41fb-ad6f-1ad65b5db417" containerName="extract" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.368546 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.371382 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-2lv5l" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.371394 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.371404 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.395500 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh"] Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.524786 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6zl\" (UniqueName: \"kubernetes.io/projected/356f8438-fd17-4eed-8b43-92331b3a006c-kube-api-access-rx6zl\") pod \"cluster-logging-operator-66689c4bbf-jdjgh\" (UID: \"356f8438-fd17-4eed-8b43-92331b3a006c\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.626620 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx6zl\" (UniqueName: \"kubernetes.io/projected/356f8438-fd17-4eed-8b43-92331b3a006c-kube-api-access-rx6zl\") pod \"cluster-logging-operator-66689c4bbf-jdjgh\" (UID: \"356f8438-fd17-4eed-8b43-92331b3a006c\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.647038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx6zl\" (UniqueName: \"kubernetes.io/projected/356f8438-fd17-4eed-8b43-92331b3a006c-kube-api-access-rx6zl\") pod 
\"cluster-logging-operator-66689c4bbf-jdjgh\" (UID: \"356f8438-fd17-4eed-8b43-92331b3a006c\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.683597 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" Mar 19 16:57:19 crc kubenswrapper[4792]: I0319 16:57:19.896318 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh"] Mar 19 16:57:20 crc kubenswrapper[4792]: I0319 16:57:20.228182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" event={"ID":"356f8438-fd17-4eed-8b43-92331b3a006c","Type":"ContainerStarted","Data":"44b0402f832bbf33f5890223dd9cfeee2a637f1afa2c84341e9e76046302bb71"} Mar 19 16:57:20 crc kubenswrapper[4792]: I0319 16:57:20.231174 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:57:20 crc kubenswrapper[4792]: I0319 16:57:20.231215 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:57:27 crc kubenswrapper[4792]: I0319 16:57:27.270170 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" event={"ID":"356f8438-fd17-4eed-8b43-92331b3a006c","Type":"ContainerStarted","Data":"3a6221b852f91dba818e3e935f398d531bb6853016c46af4782cd23ea82360ec"} Mar 19 16:57:27 crc 
kubenswrapper[4792]: I0319 16:57:27.272205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" event={"ID":"1d900a68-83bb-40f6-8841-556f80c6ac78","Type":"ContainerStarted","Data":"4dc3c3e12dd7239e59a309ef765a7fdb9af0e16c5436e2139df89d56da748c04"} Mar 19 16:57:27 crc kubenswrapper[4792]: I0319 16:57:27.273738 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:27 crc kubenswrapper[4792]: I0319 16:57:27.274655 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 16:57:27 crc kubenswrapper[4792]: I0319 16:57:27.322471 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-66689c4bbf-jdjgh" podStartSLOduration=1.306979053 podStartE2EDuration="8.322448517s" podCreationTimestamp="2026-03-19 16:57:19 +0000 UTC" firstStartedPulling="2026-03-19 16:57:19.902647199 +0000 UTC m=+1003.048704739" lastFinishedPulling="2026-03-19 16:57:26.918116653 +0000 UTC m=+1010.064174203" observedRunningTime="2026-03-19 16:57:27.289177889 +0000 UTC m=+1010.435235429" watchObservedRunningTime="2026-03-19 16:57:27.322448517 +0000 UTC m=+1010.468506057" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.673590 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podStartSLOduration=6.876192421 podStartE2EDuration="21.673572933s" podCreationTimestamp="2026-03-19 16:57:11 +0000 UTC" firstStartedPulling="2026-03-19 16:57:12.119606241 +0000 UTC m=+995.265663781" lastFinishedPulling="2026-03-19 16:57:26.916986753 +0000 UTC m=+1010.063044293" observedRunningTime="2026-03-19 16:57:27.328311412 +0000 UTC m=+1010.474368962" 
watchObservedRunningTime="2026-03-19 16:57:32.673572933 +0000 UTC m=+1015.819630473" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.676308 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.677162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.679808 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.680077 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.691627 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.853879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-299da965-13ee-4eaf-b089-f6c937a513b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-299da965-13ee-4eaf-b089-f6c937a513b7\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.853929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2km\" (UniqueName: \"kubernetes.io/projected/afe18ad9-4239-4427-9f2f-15bceb92f41a-kube-api-access-qg2km\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.955329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-299da965-13ee-4eaf-b089-f6c937a513b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-299da965-13ee-4eaf-b089-f6c937a513b7\") pod \"minio\" (UID: 
\"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.955388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg2km\" (UniqueName: \"kubernetes.io/projected/afe18ad9-4239-4427-9f2f-15bceb92f41a-kube-api-access-qg2km\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.958987 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.959036 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-299da965-13ee-4eaf-b089-f6c937a513b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-299da965-13ee-4eaf-b089-f6c937a513b7\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1063d29f553dcb25c2ce2405f6273bcab08d244c55df826a9ecb20fd7c342801/globalmount\"" pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.983401 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg2km\" (UniqueName: \"kubernetes.io/projected/afe18ad9-4239-4427-9f2f-15bceb92f41a-kube-api-access-qg2km\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.989006 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-299da965-13ee-4eaf-b089-f6c937a513b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-299da965-13ee-4eaf-b089-f6c937a513b7\") pod \"minio\" (UID: \"afe18ad9-4239-4427-9f2f-15bceb92f41a\") " pod="minio-dev/minio" Mar 19 16:57:32 crc kubenswrapper[4792]: I0319 16:57:32.994720 4792 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="minio-dev/minio" Mar 19 16:57:33 crc kubenswrapper[4792]: I0319 16:57:33.476216 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 19 16:57:34 crc kubenswrapper[4792]: I0319 16:57:34.329392 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"afe18ad9-4239-4427-9f2f-15bceb92f41a","Type":"ContainerStarted","Data":"c012c24f83a15be09d3eb6f824d7651a21dbc2254af7a1b83a39190679346aee"} Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.698450 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.701311 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.704054 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.800789 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.800853 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2zh\" (UniqueName: \"kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.800946 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.902430 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.903563 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.905192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.905494 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.905569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9t2zh\" (UniqueName: \"kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:35 crc kubenswrapper[4792]: I0319 16:57:35.930817 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2zh\" (UniqueName: \"kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh\") pod \"redhat-marketplace-r7jbv\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:36 crc kubenswrapper[4792]: I0319 16:57:36.028520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:37 crc kubenswrapper[4792]: I0319 16:57:37.496582 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:37 crc kubenswrapper[4792]: W0319 16:57:37.500480 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2e3a91_242c_4321_8422_18c803a88921.slice/crio-9821d92e414437da57ba619de02688669f53a315e7ad09c77cecaaf899ecc7a3 WatchSource:0}: Error finding container 9821d92e414437da57ba619de02688669f53a315e7ad09c77cecaaf899ecc7a3: Status 404 returned error can't find the container with id 9821d92e414437da57ba619de02688669f53a315e7ad09c77cecaaf899ecc7a3 Mar 19 16:57:38 crc kubenswrapper[4792]: I0319 16:57:38.368125 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce2e3a91-242c-4321-8422-18c803a88921" containerID="65e912f9bd3f15f290efebfe890c826ef69bf301d52f9a732435fca41f5e6e84" exitCode=0 Mar 19 16:57:38 crc kubenswrapper[4792]: I0319 16:57:38.368198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" 
event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerDied","Data":"65e912f9bd3f15f290efebfe890c826ef69bf301d52f9a732435fca41f5e6e84"} Mar 19 16:57:38 crc kubenswrapper[4792]: I0319 16:57:38.368900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerStarted","Data":"9821d92e414437da57ba619de02688669f53a315e7ad09c77cecaaf899ecc7a3"} Mar 19 16:57:38 crc kubenswrapper[4792]: I0319 16:57:38.370926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"afe18ad9-4239-4427-9f2f-15bceb92f41a","Type":"ContainerStarted","Data":"ee0ae3e5fd42fffd9a8a4eb61f6cf001c96aaea06539d114c6b53379593813e2"} Mar 19 16:57:38 crc kubenswrapper[4792]: I0319 16:57:38.400565 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.64352169 podStartE2EDuration="9.400538582s" podCreationTimestamp="2026-03-19 16:57:29 +0000 UTC" firstStartedPulling="2026-03-19 16:57:33.48065178 +0000 UTC m=+1016.626709310" lastFinishedPulling="2026-03-19 16:57:37.237668662 +0000 UTC m=+1020.383726202" observedRunningTime="2026-03-19 16:57:38.396611818 +0000 UTC m=+1021.542669358" watchObservedRunningTime="2026-03-19 16:57:38.400538582 +0000 UTC m=+1021.546596142" Mar 19 16:57:40 crc kubenswrapper[4792]: I0319 16:57:40.383910 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce2e3a91-242c-4321-8422-18c803a88921" containerID="498d33fdb4f5083c577812246dfc71706be44849d36decbac5e8e26bc94ec70b" exitCode=0 Mar 19 16:57:40 crc kubenswrapper[4792]: I0319 16:57:40.383947 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerDied","Data":"498d33fdb4f5083c577812246dfc71706be44849d36decbac5e8e26bc94ec70b"} Mar 19 16:57:41 crc kubenswrapper[4792]: I0319 
16:57:41.391228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerStarted","Data":"bb9dd8fff752990ea824b2f2e753ee3cc04bf69613c4a1f843ecc94ac087b8ad"} Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.678678 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7jbv" podStartSLOduration=7.168540374 podStartE2EDuration="9.678637809s" podCreationTimestamp="2026-03-19 16:57:35 +0000 UTC" firstStartedPulling="2026-03-19 16:57:38.370204251 +0000 UTC m=+1021.516261791" lastFinishedPulling="2026-03-19 16:57:40.880301686 +0000 UTC m=+1024.026359226" observedRunningTime="2026-03-19 16:57:41.42707356 +0000 UTC m=+1024.573131110" watchObservedRunningTime="2026-03-19 16:57:44.678637809 +0000 UTC m=+1027.824695359" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.685451 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-lmw24"] Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.686929 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.691013 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.691379 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.691606 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.692032 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-f44n9" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.693771 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.699507 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-lmw24"] Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.821673 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.821715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmz7\" (UniqueName: \"kubernetes.io/projected/54c15722-d849-4290-bf53-39c4383912e4-kube-api-access-bpmz7\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: 
\"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.821974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.822052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-config\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.822323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.854558 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58"] Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.855644 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.858313 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.858964 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.859194 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.879448 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58"] Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.923953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.924001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-config\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.924043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: 
\"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.924073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.924092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmz7\" (UniqueName: \"kubernetes.io/projected/54c15722-d849-4290-bf53-39c4383912e4-kube-api-access-bpmz7\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.926051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-config\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.926157 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.930173 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.937162 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/54c15722-d849-4290-bf53-39c4383912e4-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.940688 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6"] Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.941652 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.943674 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.943885 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.963554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmz7\" (UniqueName: \"kubernetes.io/projected/54c15722-d849-4290-bf53-39c4383912e4-kube-api-access-bpmz7\") pod \"logging-loki-distributor-9c6b6d984-lmw24\" (UID: \"54c15722-d849-4290-bf53-39c4383912e4\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:44 crc kubenswrapper[4792]: I0319 16:57:44.968645 4792 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.009296 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027106 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027148 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-config\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: 
\"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027165 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-config\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027198 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027230 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hcs\" (UniqueName: \"kubernetes.io/projected/78b39436-d594-47d8-9e75-8470495398ac-kube-api-access-78hcs\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.027287 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8kz\" (UniqueName: \"kubernetes.io/projected/03d0f2d0-18de-48b9-ba57-85e09753dccf-kube-api-access-8b8kz\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.075590 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.077265 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.096295 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.096389 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.096297 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.096512 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.096586 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.104892 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.125686 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.127080 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.128660 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-config\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.128698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-config\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.128726 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.129729 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-config\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.129767 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-config\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: 
\"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.128755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.136497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.136544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hcs\" (UniqueName: \"kubernetes.io/projected/78b39436-d594-47d8-9e75-8470495398ac-kube-api-access-78hcs\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.136631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.136692 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8b8kz\" (UniqueName: \"kubernetes.io/projected/03d0f2d0-18de-48b9-ba57-85e09753dccf-kube-api-access-8b8kz\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.137591 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.137664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.137710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.137786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") 
" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.139226 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-jsnjh" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.147807 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.149463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.152746 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.158489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/03d0f2d0-18de-48b9-ba57-85e09753dccf-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" 
Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.161021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.165894 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/78b39436-d594-47d8-9e75-8470495398ac-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.179296 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8kz\" (UniqueName: \"kubernetes.io/projected/03d0f2d0-18de-48b9-ba57-85e09753dccf-kube-api-access-8b8kz\") pod \"logging-loki-query-frontend-ff66c4dc9-z95d6\" (UID: \"03d0f2d0-18de-48b9-ba57-85e09753dccf\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.182669 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hcs\" (UniqueName: \"kubernetes.io/projected/78b39436-d594-47d8-9e75-8470495398ac-kube-api-access-78hcs\") pod \"logging-loki-querier-6dcbdf8bb8-ljg58\" (UID: \"78b39436-d594-47d8-9e75-8470495398ac\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.209417 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.238824 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.238897 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.238939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tls-secret\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.238961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tls-secret\") pod 
\"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239063 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-rbac\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tenants\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239126 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr48g\" (UniqueName: \"kubernetes.io/projected/10c782de-230d-407d-9bb1-2a8a3a8da91c-kube-api-access-dr48g\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239149 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tenants\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-rbac\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239222 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239258 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmgj8\" (UniqueName: \"kubernetes.io/projected/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-kube-api-access-mmgj8\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: 
\"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239308 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.239354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.320871 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340171 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340243 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tls-secret\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340260 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tls-secret\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-rbac\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tenants\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr48g\" (UniqueName: \"kubernetes.io/projected/10c782de-230d-407d-9bb1-2a8a3a8da91c-kube-api-access-dr48g\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340386 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tenants\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-rbac\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340435 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340453 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc 
kubenswrapper[4792]: I0319 16:57:45.340485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmgj8\" (UniqueName: \"kubernetes.io/projected/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-kube-api-access-mmgj8\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.340501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.341284 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.342074 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.344778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-ca-bundle\") pod 
\"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.345997 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tenants\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.346184 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.346547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.346794 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.346824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" 
(UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tenants\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.346802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-rbac\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.347615 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-tls-secret\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.349915 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.354504 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-lokistack-gateway\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.354950 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/10c782de-230d-407d-9bb1-2a8a3a8da91c-tls-secret\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.355933 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/10c782de-230d-407d-9bb1-2a8a3a8da91c-rbac\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.358923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr48g\" (UniqueName: \"kubernetes.io/projected/10c782de-230d-407d-9bb1-2a8a3a8da91c-kube-api-access-dr48g\") pod \"logging-loki-gateway-5bc6c599cb-2gcbl\" (UID: \"10c782de-230d-407d-9bb1-2a8a3a8da91c\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.361286 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmgj8\" (UniqueName: \"kubernetes.io/projected/1e5dbe4d-6818-4b0d-a372-b9574882f2ad-kube-api-access-mmgj8\") pod \"logging-loki-gateway-5bc6c599cb-vz8rf\" (UID: \"1e5dbe4d-6818-4b0d-a372-b9574882f2ad\") " pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.432174 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.458894 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-lmw24"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.472319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.517173 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.837946 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.838777 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.846320 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.846963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.847490 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.855447 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.890447 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.891729 4792 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.897148 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.897397 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.898624 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.924352 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956261 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956367 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cx8q\" (UniqueName: \"kubernetes.io/projected/b90cdc46-8fb4-424e-be18-e675309acdff-kube-api-access-9cx8q\") pod \"logging-loki-ingester-0\" (UID: 
\"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-config\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956524 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956557 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.956587 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.975170 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58"] Mar 19 16:57:45 crc kubenswrapper[4792]: W0319 16:57:45.979887 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78b39436_d594_47d8_9e75_8470495398ac.slice/crio-f3ef6c8fd9de388d37810cf906cecbfc5503691852980df6f6c724daf10e71d2 WatchSource:0}: Error finding container f3ef6c8fd9de388d37810cf906cecbfc5503691852980df6f6c724daf10e71d2: Status 404 returned error can't find the container with id f3ef6c8fd9de388d37810cf906cecbfc5503691852980df6f6c724daf10e71d2 Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.982679 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.983994 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.992545 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 19 16:57:45 crc kubenswrapper[4792]: I0319 16:57:45.993985 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.002461 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.029647 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.029728 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.032568 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl"] Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058030 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058078 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " 
pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058152 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058243 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058856 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058912 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.058949 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059064 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059157 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-config\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rz46\" (UniqueName: \"kubernetes.io/projected/312a9ea1-8c2b-4b68-a4c2-55869981692e-kube-api-access-7rz46\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " 
pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjth\" (UniqueName: \"kubernetes.io/projected/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-kube-api-access-2qjth\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059328 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cx8q\" (UniqueName: \"kubernetes.io/projected/b90cdc46-8fb4-424e-be18-e675309acdff-kube-api-access-9cx8q\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-config\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059374 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059389 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: 
\"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.059416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.060531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.060716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b90cdc46-8fb4-424e-be18-e675309acdff-config\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.063160 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.063201 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e0ebb842ad3fb85a4081b033c88093810bf76fc0aaf8020576df8fafa213962/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.063410 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.063435 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/954e3d7178660034c89282772759db6aaee7785953f6e6f2a59fb29e4a0bbde9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.065102 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.065617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.067288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b90cdc46-8fb4-424e-be18-e675309acdff-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.076604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cx8q\" (UniqueName: \"kubernetes.io/projected/b90cdc46-8fb4-424e-be18-e675309acdff-kube-api-access-9cx8q\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.077730 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.090827 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0b50390-088a-4531-811f-7dd3ae7c4df3\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.107659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b8fac3b-73e1-476b-b2f7-6cc2585a3d7b\") pod \"logging-loki-ingester-0\" (UID: \"b90cdc46-8fb4-424e-be18-e675309acdff\") " 
pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160856 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-config\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rz46\" (UniqueName: \"kubernetes.io/projected/312a9ea1-8c2b-4b68-a4c2-55869981692e-kube-api-access-7rz46\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160958 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qjth\" (UniqueName: \"kubernetes.io/projected/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-kube-api-access-2qjth\") pod \"logging-loki-index-gateway-0\" (UID: 
\"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.160990 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.161015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.161042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.161068 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.161240 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-s3\") pod 
\"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.161296 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-config\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162195 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162209 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.162225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.165741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.165771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.165813 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.173908 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/312a9ea1-8c2b-4b68-a4c2-55869981692e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.202868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.204507 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.204943 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9fefda0e999f6d500065de0147bc7968a251e4a431792de38f2bf822fb2d8378/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.204585 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.205053 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51a6efdf7b250adeb52f0dfba695b6c69429dc87b713c2f2fcf42131be39e57a/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.206248 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.207529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qjth\" (UniqueName: \"kubernetes.io/projected/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-kube-api-access-2qjth\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.207981 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rz46\" (UniqueName: \"kubernetes.io/projected/312a9ea1-8c2b-4b68-a4c2-55869981692e-kube-api-access-7rz46\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.208678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.216372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d42fa7f9-ea92-480c-8de6-cf0b6b9219e6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.237489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a19384e-dd71-4033-8cc0-90925ec48a23\") pod \"logging-loki-compactor-0\" (UID: \"312a9ea1-8c2b-4b68-a4c2-55869981692e\") " pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.238788 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-790d93c2-e6b1-46fa-942b-76c3b0f2e988\") pod \"logging-loki-index-gateway-0\" (UID: \"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.320991 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.425310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" event={"ID":"1e5dbe4d-6818-4b0d-a372-b9574882f2ad","Type":"ContainerStarted","Data":"3b99df4d1d2d4f68283f93001ae66489676317d5ab9402103e180d80860998e1"} Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.427468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" event={"ID":"78b39436-d594-47d8-9e75-8470495398ac","Type":"ContainerStarted","Data":"f3ef6c8fd9de388d37810cf906cecbfc5503691852980df6f6c724daf10e71d2"} Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.428607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" event={"ID":"10c782de-230d-407d-9bb1-2a8a3a8da91c","Type":"ContainerStarted","Data":"fe7471331305698feb9276e36e1b54c3e3ce1cee119d89b6825240a638ec41e4"} Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.433073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" event={"ID":"03d0f2d0-18de-48b9-ba57-85e09753dccf","Type":"ContainerStarted","Data":"8a073d3448b2ce1aba31da3cf2525bf2d94d6156b8f38a2fbf5108d534e7c6f9"} Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.435774 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" event={"ID":"54c15722-d849-4290-bf53-39c4383912e4","Type":"ContainerStarted","Data":"21160b9122a3bd512951ed655386e03cd789abb01586643cfc53daba4621f924"} Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.495216 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.512261 
4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.549056 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.561361 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 19 16:57:46 crc kubenswrapper[4792]: W0319 16:57:46.563968 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42fa7f9_ea92_480c_8de6_cf0b6b9219e6.slice/crio-397c11b48e003b54ad41e587811c701f9611c152faeabd2f58881e2f689918dc WatchSource:0}: Error finding container 397c11b48e003b54ad41e587811c701f9611c152faeabd2f58881e2f689918dc: Status 404 returned error can't find the container with id 397c11b48e003b54ad41e587811c701f9611c152faeabd2f58881e2f689918dc Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.627870 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 19 16:57:46 crc kubenswrapper[4792]: W0319 16:57:46.638867 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb90cdc46_8fb4_424e_be18_e675309acdff.slice/crio-325d63d53a45b5b13463c08287cd024f2345c70be89ecf832d225c62b6818339 WatchSource:0}: Error finding container 325d63d53a45b5b13463c08287cd024f2345c70be89ecf832d225c62b6818339: Status 404 returned error can't find the container with id 325d63d53a45b5b13463c08287cd024f2345c70be89ecf832d225c62b6818339 Mar 19 16:57:46 crc kubenswrapper[4792]: I0319 16:57:46.747381 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 19 16:57:46 crc kubenswrapper[4792]: W0319 16:57:46.750116 4792 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312a9ea1_8c2b_4b68_a4c2_55869981692e.slice/crio-a839df3df6a3fc7fd2425e8329e2de1dfb24380bfbb174cda0ba5817963ace4f WatchSource:0}: Error finding container a839df3df6a3fc7fd2425e8329e2de1dfb24380bfbb174cda0ba5817963ace4f: Status 404 returned error can't find the container with id a839df3df6a3fc7fd2425e8329e2de1dfb24380bfbb174cda0ba5817963ace4f Mar 19 16:57:47 crc kubenswrapper[4792]: I0319 16:57:47.446892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b90cdc46-8fb4-424e-be18-e675309acdff","Type":"ContainerStarted","Data":"325d63d53a45b5b13463c08287cd024f2345c70be89ecf832d225c62b6818339"} Mar 19 16:57:47 crc kubenswrapper[4792]: I0319 16:57:47.448936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"312a9ea1-8c2b-4b68-a4c2-55869981692e","Type":"ContainerStarted","Data":"a839df3df6a3fc7fd2425e8329e2de1dfb24380bfbb174cda0ba5817963ace4f"} Mar 19 16:57:47 crc kubenswrapper[4792]: I0319 16:57:47.450952 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6","Type":"ContainerStarted","Data":"397c11b48e003b54ad41e587811c701f9611c152faeabd2f58881e2f689918dc"} Mar 19 16:57:48 crc kubenswrapper[4792]: I0319 16:57:48.457174 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r7jbv" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="registry-server" containerID="cri-o://bb9dd8fff752990ea824b2f2e753ee3cc04bf69613c4a1f843ecc94ac087b8ad" gracePeriod=2 Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.466621 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce2e3a91-242c-4321-8422-18c803a88921" 
containerID="bb9dd8fff752990ea824b2f2e753ee3cc04bf69613c4a1f843ecc94ac087b8ad" exitCode=0 Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.466727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerDied","Data":"bb9dd8fff752990ea824b2f2e753ee3cc04bf69613c4a1f843ecc94ac087b8ad"} Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.832143 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.962760 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2zh\" (UniqueName: \"kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh\") pod \"ce2e3a91-242c-4321-8422-18c803a88921\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.962813 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities\") pod \"ce2e3a91-242c-4321-8422-18c803a88921\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.963003 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content\") pod \"ce2e3a91-242c-4321-8422-18c803a88921\" (UID: \"ce2e3a91-242c-4321-8422-18c803a88921\") " Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.963709 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities" (OuterVolumeSpecName: "utilities") pod "ce2e3a91-242c-4321-8422-18c803a88921" (UID: 
"ce2e3a91-242c-4321-8422-18c803a88921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.973228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh" (OuterVolumeSpecName: "kube-api-access-9t2zh") pod "ce2e3a91-242c-4321-8422-18c803a88921" (UID: "ce2e3a91-242c-4321-8422-18c803a88921"). InnerVolumeSpecName "kube-api-access-9t2zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:57:49 crc kubenswrapper[4792]: I0319 16:57:49.989163 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce2e3a91-242c-4321-8422-18c803a88921" (UID: "ce2e3a91-242c-4321-8422-18c803a88921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.064589 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.064965 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t2zh\" (UniqueName: \"kubernetes.io/projected/ce2e3a91-242c-4321-8422-18c803a88921-kube-api-access-9t2zh\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.064980 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2e3a91-242c-4321-8422-18c803a88921-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.230372 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.230419 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.230459 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.231025 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.231089 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb" gracePeriod=600 Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.476575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7jbv" event={"ID":"ce2e3a91-242c-4321-8422-18c803a88921","Type":"ContainerDied","Data":"9821d92e414437da57ba619de02688669f53a315e7ad09c77cecaaf899ecc7a3"} Mar 19 16:57:50 crc 
kubenswrapper[4792]: I0319 16:57:50.476659 4792 scope.go:117] "RemoveContainer" containerID="bb9dd8fff752990ea824b2f2e753ee3cc04bf69613c4a1f843ecc94ac087b8ad" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.476794 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7jbv" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.488563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" event={"ID":"10c782de-230d-407d-9bb1-2a8a3a8da91c","Type":"ContainerStarted","Data":"1781b0407608dc7d5a688158525a9df08138fcc14f51edb2c9632390500cdae7"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.491407 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" event={"ID":"54c15722-d849-4290-bf53-39c4383912e4","Type":"ContainerStarted","Data":"c5cf90c8433526af40b6acd416ba5e5b30de66a681827ba3ccdb86699deef138"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.491687 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.494220 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d42fa7f9-ea92-480c-8de6-cf0b6b9219e6","Type":"ContainerStarted","Data":"1f844fd7f837f1fc9dc904bf77edff354377a3a187f9b916b51214e60864bbc5"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.494725 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.497627 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"312a9ea1-8c2b-4b68-a4c2-55869981692e","Type":"ContainerStarted","Data":"b25324b14c03f768aef76699053a9630fb36d1c8cb2fdc3818eb6be264c0edbc"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.498170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.501611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" event={"ID":"1e5dbe4d-6818-4b0d-a372-b9574882f2ad","Type":"ContainerStarted","Data":"4919ff9e893de48534b259f09a41ac35ccf40a329f380c01c6a74ef9aafc2e81"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.504310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" event={"ID":"78b39436-d594-47d8-9e75-8470495398ac","Type":"ContainerStarted","Data":"d36210f814fbf83453752050ba80834c789cecead8792b47c6ead2d0c0bbaf2f"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.505082 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.506188 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" event={"ID":"03d0f2d0-18de-48b9-ba57-85e09753dccf","Type":"ContainerStarted","Data":"3072aaf09789d03da0c19f8eba1ca2b39a3c2c3b74361f8eb8ca604265c10541"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.506572 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.519540 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb" exitCode=0 Mar 19 
16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.519608 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.520557 4792 scope.go:117] "RemoveContainer" containerID="498d33fdb4f5083c577812246dfc71706be44849d36decbac5e8e26bc94ec70b" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.522967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b90cdc46-8fb4-424e-be18-e675309acdff","Type":"ContainerStarted","Data":"eb90c8a48cbd59391d704313f3ebb108c1bf49742e9a0cf7a507b04f61a87080"} Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.523265 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.541163 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podStartSLOduration=2.508943608 podStartE2EDuration="6.541137705s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:45.493175792 +0000 UTC m=+1028.639233342" lastFinishedPulling="2026-03-19 16:57:49.525369899 +0000 UTC m=+1032.671427439" observedRunningTime="2026-03-19 16:57:50.516613118 +0000 UTC m=+1033.662670668" watchObservedRunningTime="2026-03-19 16:57:50.541137705 +0000 UTC m=+1033.687195235" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.547797 4792 scope.go:117] "RemoveContainer" containerID="65e912f9bd3f15f290efebfe890c826ef69bf301d52f9a732435fca41f5e6e84" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.568818 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" podStartSLOduration=2.914429712 podStartE2EDuration="6.568798965s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:45.982947142 +0000 UTC m=+1029.129004692" lastFinishedPulling="2026-03-19 16:57:49.637316405 +0000 UTC m=+1032.783373945" observedRunningTime="2026-03-19 16:57:50.542141131 +0000 UTC m=+1033.688198671" watchObservedRunningTime="2026-03-19 16:57:50.568798965 +0000 UTC m=+1033.714856505" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.570688 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" podStartSLOduration=2.794296771 podStartE2EDuration="6.570680225s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:45.85521178 +0000 UTC m=+1029.001269320" lastFinishedPulling="2026-03-19 16:57:49.631595234 +0000 UTC m=+1032.777652774" observedRunningTime="2026-03-19 16:57:50.563753742 +0000 UTC m=+1033.709811302" watchObservedRunningTime="2026-03-19 16:57:50.570680225 +0000 UTC m=+1033.716737765" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.590323 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.542219955 podStartE2EDuration="6.590304783s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:46.566658401 +0000 UTC m=+1029.712715941" lastFinishedPulling="2026-03-19 16:57:49.614743229 +0000 UTC m=+1032.760800769" observedRunningTime="2026-03-19 16:57:50.586534114 +0000 UTC m=+1033.732591654" watchObservedRunningTime="2026-03-19 16:57:50.590304783 +0000 UTC m=+1033.736362323" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.592096 4792 scope.go:117] "RemoveContainer" containerID="8fec416c9bc9f932f648a25ada539f17bfee109f18ef5d78432b6c269a1dd821" Mar 19 16:57:50 crc 
kubenswrapper[4792]: I0319 16:57:50.616867 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.621829 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7jbv"] Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.656729 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.667146024 podStartE2EDuration="6.656710306s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:46.640961253 +0000 UTC m=+1029.787018793" lastFinishedPulling="2026-03-19 16:57:49.630525535 +0000 UTC m=+1032.776583075" observedRunningTime="2026-03-19 16:57:50.652095394 +0000 UTC m=+1033.798152934" watchObservedRunningTime="2026-03-19 16:57:50.656710306 +0000 UTC m=+1033.802767846" Mar 19 16:57:50 crc kubenswrapper[4792]: I0319 16:57:50.659526 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.796342284 podStartE2EDuration="6.65951437s" podCreationTimestamp="2026-03-19 16:57:44 +0000 UTC" firstStartedPulling="2026-03-19 16:57:46.752483807 +0000 UTC m=+1029.898541347" lastFinishedPulling="2026-03-19 16:57:49.615655883 +0000 UTC m=+1032.761713433" observedRunningTime="2026-03-19 16:57:50.634230353 +0000 UTC m=+1033.780287903" watchObservedRunningTime="2026-03-19 16:57:50.65951437 +0000 UTC m=+1033.805571910" Mar 19 16:57:51 crc kubenswrapper[4792]: I0319 16:57:51.534278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27"} Mar 19 16:57:51 crc kubenswrapper[4792]: I0319 16:57:51.774035 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2e3a91-242c-4321-8422-18c803a88921" path="/var/lib/kubelet/pods/ce2e3a91-242c-4321-8422-18c803a88921/volumes" Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.543855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" event={"ID":"1e5dbe4d-6818-4b0d-a372-b9574882f2ad","Type":"ContainerStarted","Data":"4f40869106994e32c3a6185fd9c1cc6a3e9d30e65059292cb60c1b100afaa5ef"} Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.544255 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.548469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" event={"ID":"10c782de-230d-407d-9bb1-2a8a3a8da91c","Type":"ContainerStarted","Data":"4f0d34793fec84f6a1f25255ab8fddc6e93cbb9a5ed14ed7a306475d10e1dc7b"} Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.556476 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.573123 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podStartSLOduration=1.445552551 podStartE2EDuration="7.573093095s" podCreationTimestamp="2026-03-19 16:57:45 +0000 UTC" firstStartedPulling="2026-03-19 16:57:45.94274307 +0000 UTC m=+1029.088800600" lastFinishedPulling="2026-03-19 16:57:52.070283604 +0000 UTC m=+1035.216341144" observedRunningTime="2026-03-19 16:57:52.563432089 +0000 UTC m=+1035.709489669" watchObservedRunningTime="2026-03-19 16:57:52.573093095 +0000 UTC m=+1035.719150675" Mar 19 16:57:52 crc kubenswrapper[4792]: I0319 16:57:52.619005 4792 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podStartSLOduration=1.5868728810000001 podStartE2EDuration="7.618979805s" podCreationTimestamp="2026-03-19 16:57:45 +0000 UTC" firstStartedPulling="2026-03-19 16:57:46.04349047 +0000 UTC m=+1029.189548010" lastFinishedPulling="2026-03-19 16:57:52.075597394 +0000 UTC m=+1035.221654934" observedRunningTime="2026-03-19 16:57:52.609470993 +0000 UTC m=+1035.755528533" watchObservedRunningTime="2026-03-19 16:57:52.618979805 +0000 UTC m=+1035.765037355" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.556354 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.556747 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.556773 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.569131 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.572625 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" Mar 19 16:57:53 crc kubenswrapper[4792]: I0319 16:57:53.575550 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.126146 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565658-586dw"] Mar 19 16:58:00 crc kubenswrapper[4792]: E0319 16:58:00.126915 4792 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="extract-utilities" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.126929 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="extract-utilities" Mar 19 16:58:00 crc kubenswrapper[4792]: E0319 16:58:00.126949 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="registry-server" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.126956 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="registry-server" Mar 19 16:58:00 crc kubenswrapper[4792]: E0319 16:58:00.126968 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="extract-content" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.126974 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="extract-content" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.127105 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2e3a91-242c-4321-8422-18c803a88921" containerName="registry-server" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.127609 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.131093 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.131125 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.131521 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.136513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565658-586dw"] Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.221717 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55s4s\" (UniqueName: \"kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s\") pod \"auto-csr-approver-29565658-586dw\" (UID: \"f138b905-2e1e-42a0-a36c-b1a31b9811bd\") " pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.323253 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55s4s\" (UniqueName: \"kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s\") pod \"auto-csr-approver-29565658-586dw\" (UID: \"f138b905-2e1e-42a0-a36c-b1a31b9811bd\") " pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.350508 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55s4s\" (UniqueName: \"kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s\") pod \"auto-csr-approver-29565658-586dw\" (UID: \"f138b905-2e1e-42a0-a36c-b1a31b9811bd\") " 
pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.448768 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:00 crc kubenswrapper[4792]: I0319 16:58:00.915146 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565658-586dw"] Mar 19 16:58:00 crc kubenswrapper[4792]: W0319 16:58:00.924018 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf138b905_2e1e_42a0_a36c_b1a31b9811bd.slice/crio-5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0 WatchSource:0}: Error finding container 5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0: Status 404 returned error can't find the container with id 5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0 Mar 19 16:58:01 crc kubenswrapper[4792]: I0319 16:58:01.636596 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565658-586dw" event={"ID":"f138b905-2e1e-42a0-a36c-b1a31b9811bd","Type":"ContainerStarted","Data":"5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0"} Mar 19 16:58:03 crc kubenswrapper[4792]: I0319 16:58:03.664982 4792 generic.go:334] "Generic (PLEG): container finished" podID="f138b905-2e1e-42a0-a36c-b1a31b9811bd" containerID="3244184b4cbceafcb012d54a191bea8048c5c6bebc6ccfca7cf7f7061ab3cc7f" exitCode=0 Mar 19 16:58:03 crc kubenswrapper[4792]: I0319 16:58:03.665115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565658-586dw" event={"ID":"f138b905-2e1e-42a0-a36c-b1a31b9811bd","Type":"ContainerDied","Data":"3244184b4cbceafcb012d54a191bea8048c5c6bebc6ccfca7cf7f7061ab3cc7f"} Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.017431 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.028610 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.117568 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55s4s\" (UniqueName: \"kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s\") pod \"f138b905-2e1e-42a0-a36c-b1a31b9811bd\" (UID: \"f138b905-2e1e-42a0-a36c-b1a31b9811bd\") " Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.124714 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s" (OuterVolumeSpecName: "kube-api-access-55s4s") pod "f138b905-2e1e-42a0-a36c-b1a31b9811bd" (UID: "f138b905-2e1e-42a0-a36c-b1a31b9811bd"). InnerVolumeSpecName "kube-api-access-55s4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.219735 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55s4s\" (UniqueName: \"kubernetes.io/projected/f138b905-2e1e-42a0-a36c-b1a31b9811bd-kube-api-access-55s4s\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.330065 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.484098 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.682219 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565658-586dw" event={"ID":"f138b905-2e1e-42a0-a36c-b1a31b9811bd","Type":"ContainerDied","Data":"5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0"} Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.682726 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5078cdafbe15b319fa75ee382671c71f9e720bf01143e74abecfc55bfc9cd9a0" Mar 19 16:58:05 crc kubenswrapper[4792]: I0319 16:58:05.682346 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565658-586dw" Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.148127 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565652-xrx8j"] Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.157577 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565652-xrx8j"] Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.222763 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.222836 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.334041 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 19 16:58:06 crc kubenswrapper[4792]: I0319 16:58:06.520097 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.758367 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1eb80a9-4b3a-4977-bb3b-8649c1d7660d" path="/var/lib/kubelet/pods/a1eb80a9-4b3a-4977-bb3b-8649c1d7660d/volumes" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.965914 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:07 crc kubenswrapper[4792]: E0319 16:58:07.966753 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f138b905-2e1e-42a0-a36c-b1a31b9811bd" containerName="oc" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.966810 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f138b905-2e1e-42a0-a36c-b1a31b9811bd" containerName="oc" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.967189 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f138b905-2e1e-42a0-a36c-b1a31b9811bd" containerName="oc" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.979670 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:07 crc kubenswrapper[4792]: I0319 16:58:07.995555 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.065135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.065187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rlb\" (UniqueName: \"kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.065454 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " 
pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.167397 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.167567 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rlb\" (UniqueName: \"kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.167633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.168595 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.168645 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " 
pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.191023 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rlb\" (UniqueName: \"kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb\") pod \"certified-operators-68rlz\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.300899 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:08 crc kubenswrapper[4792]: I0319 16:58:08.753219 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:09 crc kubenswrapper[4792]: I0319 16:58:09.721139 4792 generic.go:334] "Generic (PLEG): container finished" podID="fa85f440-5252-44eb-b0cb-61609912de42" containerID="0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc" exitCode=0 Mar 19 16:58:09 crc kubenswrapper[4792]: I0319 16:58:09.721190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerDied","Data":"0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc"} Mar 19 16:58:09 crc kubenswrapper[4792]: I0319 16:58:09.721232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerStarted","Data":"cd9705140a28ebef07f212ddc40b14d10d2034af68390108bb6d975d3dd720bb"} Mar 19 16:58:10 crc kubenswrapper[4792]: I0319 16:58:10.731822 4792 generic.go:334] "Generic (PLEG): container finished" podID="fa85f440-5252-44eb-b0cb-61609912de42" containerID="fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e" exitCode=0 Mar 19 16:58:10 crc 
kubenswrapper[4792]: I0319 16:58:10.731868 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerDied","Data":"fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e"} Mar 19 16:58:11 crc kubenswrapper[4792]: I0319 16:58:11.752603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerStarted","Data":"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f"} Mar 19 16:58:11 crc kubenswrapper[4792]: I0319 16:58:11.772633 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68rlz" podStartSLOduration=3.341172969 podStartE2EDuration="4.772615167s" podCreationTimestamp="2026-03-19 16:58:07 +0000 UTC" firstStartedPulling="2026-03-19 16:58:09.725038413 +0000 UTC m=+1052.871095963" lastFinishedPulling="2026-03-19 16:58:11.156480621 +0000 UTC m=+1054.302538161" observedRunningTime="2026-03-19 16:58:11.764936796 +0000 UTC m=+1054.910994356" watchObservedRunningTime="2026-03-19 16:58:11.772615167 +0000 UTC m=+1054.918672727" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.255022 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.258012 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.263362 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.386740 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.386796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.387025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmrx\" (UniqueName: \"kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.488573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmrx\" (UniqueName: \"kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.488653 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.488706 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.489112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.489166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.507071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmrx\" (UniqueName: \"kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx\") pod \"community-operators-5swgg\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:15 crc kubenswrapper[4792]: I0319 16:58:15.587739 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.086331 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.215743 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.215820 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.789174 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerID="c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa" exitCode=0 Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.789243 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerDied","Data":"c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa"} Mar 19 16:58:16 crc kubenswrapper[4792]: I0319 16:58:16.789283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerStarted","Data":"22c0c1ddb008bf470f2a8ab43b9da15c74a3e6395de6fc614f38ddf2005be795"} Mar 19 16:58:17 crc kubenswrapper[4792]: I0319 16:58:17.797889 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" 
containerID="c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7" exitCode=0 Mar 19 16:58:17 crc kubenswrapper[4792]: I0319 16:58:17.797967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerDied","Data":"c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7"} Mar 19 16:58:18 crc kubenswrapper[4792]: I0319 16:58:18.301741 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:18 crc kubenswrapper[4792]: I0319 16:58:18.302118 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:18 crc kubenswrapper[4792]: I0319 16:58:18.343459 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:18 crc kubenswrapper[4792]: I0319 16:58:18.888253 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:19 crc kubenswrapper[4792]: I0319 16:58:19.813992 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerStarted","Data":"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884"} Mar 19 16:58:19 crc kubenswrapper[4792]: I0319 16:58:19.836760 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5swgg" podStartSLOduration=2.738927938 podStartE2EDuration="4.836741959s" podCreationTimestamp="2026-03-19 16:58:15 +0000 UTC" firstStartedPulling="2026-03-19 16:58:16.792579091 +0000 UTC m=+1059.938636631" lastFinishedPulling="2026-03-19 16:58:18.890393082 +0000 UTC m=+1062.036450652" observedRunningTime="2026-03-19 
16:58:19.830424396 +0000 UTC m=+1062.976481936" watchObservedRunningTime="2026-03-19 16:58:19.836741959 +0000 UTC m=+1062.982799499" Mar 19 16:58:20 crc kubenswrapper[4792]: I0319 16:58:20.608928 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:21 crc kubenswrapper[4792]: I0319 16:58:21.132938 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-68rlz" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="registry-server" containerID="cri-o://eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f" gracePeriod=2 Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.001311 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.105721 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content\") pod \"fa85f440-5252-44eb-b0cb-61609912de42\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.105888 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25rlb\" (UniqueName: \"kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb\") pod \"fa85f440-5252-44eb-b0cb-61609912de42\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.105932 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities\") pod \"fa85f440-5252-44eb-b0cb-61609912de42\" (UID: \"fa85f440-5252-44eb-b0cb-61609912de42\") " Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 
16:58:22.107890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities" (OuterVolumeSpecName: "utilities") pod "fa85f440-5252-44eb-b0cb-61609912de42" (UID: "fa85f440-5252-44eb-b0cb-61609912de42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.111511 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb" (OuterVolumeSpecName: "kube-api-access-25rlb") pod "fa85f440-5252-44eb-b0cb-61609912de42" (UID: "fa85f440-5252-44eb-b0cb-61609912de42"). InnerVolumeSpecName "kube-api-access-25rlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.144000 4792 generic.go:334] "Generic (PLEG): container finished" podID="fa85f440-5252-44eb-b0cb-61609912de42" containerID="eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f" exitCode=0 Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.144064 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68rlz" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.144064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerDied","Data":"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f"} Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.144782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68rlz" event={"ID":"fa85f440-5252-44eb-b0cb-61609912de42","Type":"ContainerDied","Data":"cd9705140a28ebef07f212ddc40b14d10d2034af68390108bb6d975d3dd720bb"} Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.144906 4792 scope.go:117] "RemoveContainer" containerID="eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.159685 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa85f440-5252-44eb-b0cb-61609912de42" (UID: "fa85f440-5252-44eb-b0cb-61609912de42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.164878 4792 scope.go:117] "RemoveContainer" containerID="fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.180898 4792 scope.go:117] "RemoveContainer" containerID="0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.204818 4792 scope.go:117] "RemoveContainer" containerID="eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f" Mar 19 16:58:22 crc kubenswrapper[4792]: E0319 16:58:22.205158 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f\": container with ID starting with eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f not found: ID does not exist" containerID="eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.205190 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f"} err="failed to get container status \"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f\": rpc error: code = NotFound desc = could not find container \"eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f\": container with ID starting with eaa8917cd7a9aa8cfaea7b082e46638d58292ddcca4b83b8cf19cb809ca49f4f not found: ID does not exist" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.205210 4792 scope.go:117] "RemoveContainer" containerID="fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e" Mar 19 16:58:22 crc kubenswrapper[4792]: E0319 16:58:22.205420 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e\": container with ID starting with fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e not found: ID does not exist" containerID="fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.205444 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e"} err="failed to get container status \"fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e\": rpc error: code = NotFound desc = could not find container \"fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e\": container with ID starting with fe0b5c24513a8851ab181d6204ae9dbcdba460c29613d8b59f89e98fc38f142e not found: ID does not exist" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.205458 4792 scope.go:117] "RemoveContainer" containerID="0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc" Mar 19 16:58:22 crc kubenswrapper[4792]: E0319 16:58:22.205690 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc\": container with ID starting with 0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc not found: ID does not exist" containerID="0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.205711 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc"} err="failed to get container status \"0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc\": rpc error: code = NotFound desc = could not find container \"0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc\": 
container with ID starting with 0ac66cacdd7c235bfd2c300ec3c39f393725225c96389aa3e3360f8b9bb1eabc not found: ID does not exist" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.207726 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.207765 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25rlb\" (UniqueName: \"kubernetes.io/projected/fa85f440-5252-44eb-b0cb-61609912de42-kube-api-access-25rlb\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.207780 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85f440-5252-44eb-b0cb-61609912de42-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.475371 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:22 crc kubenswrapper[4792]: I0319 16:58:22.481382 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-68rlz"] Mar 19 16:58:23 crc kubenswrapper[4792]: I0319 16:58:23.753484 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85f440-5252-44eb-b0cb-61609912de42" path="/var/lib/kubelet/pods/fa85f440-5252-44eb-b0cb-61609912de42/volumes" Mar 19 16:58:25 crc kubenswrapper[4792]: I0319 16:58:25.588717 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:25 crc kubenswrapper[4792]: I0319 16:58:25.589138 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:25 crc kubenswrapper[4792]: I0319 16:58:25.650424 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:26 crc kubenswrapper[4792]: I0319 16:58:26.215280 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 19 16:58:26 crc kubenswrapper[4792]: I0319 16:58:26.215776 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 19 16:58:26 crc kubenswrapper[4792]: I0319 16:58:26.271874 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:26 crc kubenswrapper[4792]: I0319 16:58:26.321133 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.224450 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5swgg" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="registry-server" containerID="cri-o://1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884" gracePeriod=2 Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.623960 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.714031 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities\") pod \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.714100 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmrx\" (UniqueName: \"kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx\") pod \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.714403 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content\") pod \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\" (UID: \"ce6e8001-06f3-4f34-8916-b6a7c614dac8\") " Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.715387 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities" (OuterVolumeSpecName: "utilities") pod "ce6e8001-06f3-4f34-8916-b6a7c614dac8" (UID: "ce6e8001-06f3-4f34-8916-b6a7c614dac8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.717704 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.731559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx" (OuterVolumeSpecName: "kube-api-access-ksmrx") pod "ce6e8001-06f3-4f34-8916-b6a7c614dac8" (UID: "ce6e8001-06f3-4f34-8916-b6a7c614dac8"). InnerVolumeSpecName "kube-api-access-ksmrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.819971 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmrx\" (UniqueName: \"kubernetes.io/projected/ce6e8001-06f3-4f34-8916-b6a7c614dac8-kube-api-access-ksmrx\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.909106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce6e8001-06f3-4f34-8916-b6a7c614dac8" (UID: "ce6e8001-06f3-4f34-8916-b6a7c614dac8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:58:28 crc kubenswrapper[4792]: I0319 16:58:28.921560 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6e8001-06f3-4f34-8916-b6a7c614dac8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.235585 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerID="1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884" exitCode=0 Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.235662 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5swgg" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.235655 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerDied","Data":"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884"} Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.235792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5swgg" event={"ID":"ce6e8001-06f3-4f34-8916-b6a7c614dac8","Type":"ContainerDied","Data":"22c0c1ddb008bf470f2a8ab43b9da15c74a3e6395de6fc614f38ddf2005be795"} Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.235827 4792 scope.go:117] "RemoveContainer" containerID="1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.273421 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.280469 4792 scope.go:117] "RemoveContainer" containerID="c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7" Mar 19 16:58:29 crc kubenswrapper[4792]: 
I0319 16:58:29.296712 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5swgg"] Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.302554 4792 scope.go:117] "RemoveContainer" containerID="c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.324268 4792 scope.go:117] "RemoveContainer" containerID="1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884" Mar 19 16:58:29 crc kubenswrapper[4792]: E0319 16:58:29.324830 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884\": container with ID starting with 1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884 not found: ID does not exist" containerID="1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.324893 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884"} err="failed to get container status \"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884\": rpc error: code = NotFound desc = could not find container \"1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884\": container with ID starting with 1f558b598f2f1daf5b38f2c98649bfd20a1d73a367a7fc91119fd2d602208884 not found: ID does not exist" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.324919 4792 scope.go:117] "RemoveContainer" containerID="c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7" Mar 19 16:58:29 crc kubenswrapper[4792]: E0319 16:58:29.325370 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7\": container 
with ID starting with c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7 not found: ID does not exist" containerID="c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.325465 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7"} err="failed to get container status \"c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7\": rpc error: code = NotFound desc = could not find container \"c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7\": container with ID starting with c09855f7b1f881d1254bc78b398560de2e69814d918ac8f8ea20a4bb99d08ba7 not found: ID does not exist" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.325590 4792 scope.go:117] "RemoveContainer" containerID="c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa" Mar 19 16:58:29 crc kubenswrapper[4792]: E0319 16:58:29.326111 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa\": container with ID starting with c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa not found: ID does not exist" containerID="c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.326188 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa"} err="failed to get container status \"c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa\": rpc error: code = NotFound desc = could not find container \"c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa\": container with ID starting with c65d65261f64007a3af070f7385ddd3016e6e9c874882f81be2fc48b5d21b4fa not 
found: ID does not exist" Mar 19 16:58:29 crc kubenswrapper[4792]: I0319 16:58:29.749689 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" path="/var/lib/kubelet/pods/ce6e8001-06f3-4f34-8916-b6a7c614dac8/volumes" Mar 19 16:58:36 crc kubenswrapper[4792]: I0319 16:58:36.212264 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 19 16:58:36 crc kubenswrapper[4792]: I0319 16:58:36.212919 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 19 16:58:39 crc kubenswrapper[4792]: I0319 16:58:39.829935 4792 scope.go:117] "RemoveContainer" containerID="7fdd279a395cc0f66290c42b86acc34f221e06435747a85e7cac1e106212bc19" Mar 19 16:58:46 crc kubenswrapper[4792]: I0319 16:58:46.215158 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.196401 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-fvtb2"] Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197537 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197558 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197583 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="extract-utilities" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197591 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="extract-utilities" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197602 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197609 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197619 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="extract-content" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197626 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="extract-content" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197635 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="extract-utilities" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197642 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="extract-utilities" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.197664 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="extract-content" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197671 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="extract-content" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197869 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fa85f440-5252-44eb-b0cb-61609912de42" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.197889 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6e8001-06f3-4f34-8916-b6a7c614dac8" containerName="registry-server" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.198730 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.201773 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.202027 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.202204 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-9z6pv" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.202299 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.202442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.212388 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.217573 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-fvtb2"] Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.355005 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fvtb2"] Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.356147 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver 
collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-vmw92 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-fvtb2" podUID="6b7f7bd1-52e3-4000-ae08-7b06f28ced14" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397034 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397159 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" 
(UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397427 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397685 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397710 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmw92\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397738 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.397772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499610 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499654 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmw92\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499700 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") pod 
\"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499780 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499809 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499828 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 
16:59:03.499862 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.499886 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.499980 4792 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.500046 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics podName:6b7f7bd1-52e3-4000-ae08-7b06f28ced14 nodeName:}" failed. No retries permitted until 2026-03-19 16:59:04.000025074 +0000 UTC m=+1107.146082614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics") pod "collector-fvtb2" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14") : secret "collector-metrics" not found Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.500049 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.500782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.500877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.500940 4792 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Mar 19 16:59:03 crc kubenswrapper[4792]: E0319 16:59:03.500979 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver podName:6b7f7bd1-52e3-4000-ae08-7b06f28ced14 nodeName:}" failed. No retries permitted until 2026-03-19 16:59:04.000968879 +0000 UTC m=+1107.147026419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver") pod "collector-fvtb2" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14") : secret "collector-syslog-receiver" not found Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.501035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.501098 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.506680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.509295 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.523594 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmw92\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92\") pod \"collector-fvtb2\" (UID: 
\"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:03 crc kubenswrapper[4792]: I0319 16:59:03.526240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.006544 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.008789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.009278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.013291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") pod \"collector-fvtb2\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.019299 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") pod \"collector-fvtb2\" 
(UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.044831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fvtb2" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110171 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110452 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110549 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110655 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmw92\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: 
\"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.110945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111057 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir" (OuterVolumeSpecName: "datadir") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111419 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111548 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111663 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token\") pod \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\" (UID: \"6b7f7bd1-52e3-4000-ae08-7b06f28ced14\") " Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111910 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.111930 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112229 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config" (OuterVolumeSpecName: "config") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112663 4792 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-datadir\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112766 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112872 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.112972 4792 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc 
kubenswrapper[4792]: I0319 16:59:04.113151 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp" (OuterVolumeSpecName: "tmp") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.113162 4792 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.113705 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.114071 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token" (OuterVolumeSpecName: "collector-token") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.114603 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92" (OuterVolumeSpecName: "kube-api-access-vmw92") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "kube-api-access-vmw92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.115101 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token" (OuterVolumeSpecName: "sa-token") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.115898 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics" (OuterVolumeSpecName: "metrics") pod "6b7f7bd1-52e3-4000-ae08-7b06f28ced14" (UID: "6b7f7bd1-52e3-4000-ae08-7b06f28ced14"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.215132 4792 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-sa-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.215186 4792 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-token\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.215204 4792 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-tmp\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.215218 4792 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 
16:59:04.215232 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmw92\" (UniqueName: \"kubernetes.io/projected/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-kube-api-access-vmw92\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:04 crc kubenswrapper[4792]: I0319 16:59:04.215246 4792 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6b7f7bd1-52e3-4000-ae08-7b06f28ced14-metrics\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.012291 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fvtb2" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.067127 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fvtb2"] Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.077354 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-fvtb2"] Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.083582 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rqcnx"] Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.084788 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.086311 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.086905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.087102 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-9z6pv" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.087264 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.090509 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rqcnx"] Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.090925 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.094112 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.130971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-entrypoint\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a879f867-df69-4895-836e-59a2c3333716-datadir\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " 
pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-trusted-ca\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131335 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config-openshift-service-cacrt\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131409 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-syslog-receiver\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcz99\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-kube-api-access-mcz99\") pod \"collector-rqcnx\" (UID: 
\"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131476 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a879f867-df69-4895-836e-59a2c3333716-tmp\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-sa-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.131623 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-metrics\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-sa-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233619 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233663 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-metrics\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-entrypoint\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a879f867-df69-4895-836e-59a2c3333716-datadir\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233829 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-trusted-ca\") pod 
\"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233907 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a879f867-df69-4895-836e-59a2c3333716-datadir\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.233914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config-openshift-service-cacrt\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234067 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-syslog-receiver\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcz99\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-kube-api-access-mcz99\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234180 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a879f867-df69-4895-836e-59a2c3333716-tmp\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 
16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234666 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-entrypoint\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.234868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-trusted-ca\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.235037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a879f867-df69-4895-836e-59a2c3333716-config-openshift-service-cacrt\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.237151 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a879f867-df69-4895-836e-59a2c3333716-tmp\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.237919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.239215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-collector-syslog-receiver\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.250744 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-sa-token\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.251193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a879f867-df69-4895-836e-59a2c3333716-metrics\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.252447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcz99\" (UniqueName: \"kubernetes.io/projected/a879f867-df69-4895-836e-59a2c3333716-kube-api-access-mcz99\") pod \"collector-rqcnx\" (UID: \"a879f867-df69-4895-836e-59a2c3333716\") " pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.403883 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rqcnx" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.749443 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b7f7bd1-52e3-4000-ae08-7b06f28ced14" path="/var/lib/kubelet/pods/6b7f7bd1-52e3-4000-ae08-7b06f28ced14/volumes" Mar 19 16:59:05 crc kubenswrapper[4792]: I0319 16:59:05.821041 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rqcnx"] Mar 19 16:59:05 crc kubenswrapper[4792]: W0319 16:59:05.822109 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda879f867_df69_4895_836e_59a2c3333716.slice/crio-3c116a57dce6e275dda2cb1a73a55f7c16294f1a5d25c802a5bf484baecef724 WatchSource:0}: Error finding container 3c116a57dce6e275dda2cb1a73a55f7c16294f1a5d25c802a5bf484baecef724: Status 404 returned error can't find the container with id 3c116a57dce6e275dda2cb1a73a55f7c16294f1a5d25c802a5bf484baecef724 Mar 19 16:59:06 crc kubenswrapper[4792]: I0319 16:59:06.021694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rqcnx" event={"ID":"a879f867-df69-4895-836e-59a2c3333716","Type":"ContainerStarted","Data":"3c116a57dce6e275dda2cb1a73a55f7c16294f1a5d25c802a5bf484baecef724"} Mar 19 16:59:10 crc kubenswrapper[4792]: I0319 16:59:10.057403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rqcnx" event={"ID":"a879f867-df69-4895-836e-59a2c3333716","Type":"ContainerStarted","Data":"2d1c12dfd414a13170bf23adf24120aa25f0e94cf9bbf9e837dae6788dda0738"} Mar 19 16:59:10 crc kubenswrapper[4792]: I0319 16:59:10.080157 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-rqcnx" podStartSLOduration=1.925974155 podStartE2EDuration="5.080135395s" podCreationTimestamp="2026-03-19 16:59:05 +0000 UTC" firstStartedPulling="2026-03-19 16:59:05.827524815 +0000 UTC 
m=+1108.973582345" lastFinishedPulling="2026-03-19 16:59:08.981686055 +0000 UTC m=+1112.127743585" observedRunningTime="2026-03-19 16:59:10.0770009 +0000 UTC m=+1113.223058460" watchObservedRunningTime="2026-03-19 16:59:10.080135395 +0000 UTC m=+1113.226192945" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.086824 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz"] Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.089029 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.091216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.098436 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz"] Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.124653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.124798 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.124858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwth\" (UniqueName: \"kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.226106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.226177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwth\" (UniqueName: \"kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.226229 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 
16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.226706 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.226742 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.245131 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwth\" (UniqueName: \"kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.416336 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:42 crc kubenswrapper[4792]: I0319 16:59:42.816551 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz"] Mar 19 16:59:43 crc kubenswrapper[4792]: I0319 16:59:43.294099 4792 generic.go:334] "Generic (PLEG): container finished" podID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerID="7dcd2824eaa191b902b002cc67b903a5eaf20ef465c7c434461cdde0f7d1e6ad" exitCode=0 Mar 19 16:59:43 crc kubenswrapper[4792]: I0319 16:59:43.294919 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" event={"ID":"e62f40d4-2108-4fee-a475-0ac60aa24d1b","Type":"ContainerDied","Data":"7dcd2824eaa191b902b002cc67b903a5eaf20ef465c7c434461cdde0f7d1e6ad"} Mar 19 16:59:43 crc kubenswrapper[4792]: I0319 16:59:43.295007 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" event={"ID":"e62f40d4-2108-4fee-a475-0ac60aa24d1b","Type":"ContainerStarted","Data":"b913e3e83a3eb9fbc619d433692a101f0b04d3787b63cf2d9479117371666ccd"} Mar 19 16:59:45 crc kubenswrapper[4792]: I0319 16:59:45.307520 4792 generic.go:334] "Generic (PLEG): container finished" podID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerID="69fe74794af956db3f5f3c6b87d80aec61422982f69bfb1dbf44903b7cb5a256" exitCode=0 Mar 19 16:59:45 crc kubenswrapper[4792]: I0319 16:59:45.307624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" event={"ID":"e62f40d4-2108-4fee-a475-0ac60aa24d1b","Type":"ContainerDied","Data":"69fe74794af956db3f5f3c6b87d80aec61422982f69bfb1dbf44903b7cb5a256"} Mar 19 16:59:46 crc kubenswrapper[4792]: I0319 16:59:46.316181 4792 
generic.go:334] "Generic (PLEG): container finished" podID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerID="aba4d7a7984872592cb6d6f303f16149e6c03bd7beb5509ae50183d6590177a8" exitCode=0 Mar 19 16:59:46 crc kubenswrapper[4792]: I0319 16:59:46.316313 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" event={"ID":"e62f40d4-2108-4fee-a475-0ac60aa24d1b","Type":"ContainerDied","Data":"aba4d7a7984872592cb6d6f303f16149e6c03bd7beb5509ae50183d6590177a8"} Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.545381 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.715050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle\") pod \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.715143 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util\") pod \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.715182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjwth\" (UniqueName: \"kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth\") pod \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\" (UID: \"e62f40d4-2108-4fee-a475-0ac60aa24d1b\") " Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.715787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle" (OuterVolumeSpecName: "bundle") pod "e62f40d4-2108-4fee-a475-0ac60aa24d1b" (UID: "e62f40d4-2108-4fee-a475-0ac60aa24d1b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.726793 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth" (OuterVolumeSpecName: "kube-api-access-wjwth") pod "e62f40d4-2108-4fee-a475-0ac60aa24d1b" (UID: "e62f40d4-2108-4fee-a475-0ac60aa24d1b"). InnerVolumeSpecName "kube-api-access-wjwth". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.729422 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util" (OuterVolumeSpecName: "util") pod "e62f40d4-2108-4fee-a475-0ac60aa24d1b" (UID: "e62f40d4-2108-4fee-a475-0ac60aa24d1b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.817357 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.817392 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62f40d4-2108-4fee-a475-0ac60aa24d1b-util\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:47 crc kubenswrapper[4792]: I0319 16:59:47.817443 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjwth\" (UniqueName: \"kubernetes.io/projected/e62f40d4-2108-4fee-a475-0ac60aa24d1b-kube-api-access-wjwth\") on node \"crc\" DevicePath \"\"" Mar 19 16:59:48 crc kubenswrapper[4792]: I0319 16:59:48.334017 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" event={"ID":"e62f40d4-2108-4fee-a475-0ac60aa24d1b","Type":"ContainerDied","Data":"b913e3e83a3eb9fbc619d433692a101f0b04d3787b63cf2d9479117371666ccd"} Mar 19 16:59:48 crc kubenswrapper[4792]: I0319 16:59:48.334344 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b913e3e83a3eb9fbc619d433692a101f0b04d3787b63cf2d9479117371666ccd" Mar 19 16:59:48 crc kubenswrapper[4792]: I0319 16:59:48.334104 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz" Mar 19 16:59:50 crc kubenswrapper[4792]: I0319 16:59:50.230698 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 16:59:50 crc kubenswrapper[4792]: I0319 16:59:50.232437 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.967037 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk"] Mar 19 16:59:53 crc kubenswrapper[4792]: E0319 16:59:53.967531 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="util" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.967542 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="util" Mar 19 16:59:53 crc kubenswrapper[4792]: E0319 16:59:53.967554 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="pull" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.967559 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="pull" Mar 19 16:59:53 crc kubenswrapper[4792]: E0319 16:59:53.967579 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="extract" Mar 19 
16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.967585 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="extract" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.967733 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62f40d4-2108-4fee-a475-0ac60aa24d1b" containerName="extract" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.968242 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.970069 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.970307 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4n8gz" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.974646 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 19 16:59:53 crc kubenswrapper[4792]: I0319 16:59:53.984740 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk"] Mar 19 16:59:54 crc kubenswrapper[4792]: I0319 16:59:54.107934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffpw\" (UniqueName: \"kubernetes.io/projected/ee465a76-03de-4983-9c1f-a064e12aed69-kube-api-access-2ffpw\") pod \"nmstate-operator-796d4cfff4-gzvjk\" (UID: \"ee465a76-03de-4983-9c1f-a064e12aed69\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" Mar 19 16:59:54 crc kubenswrapper[4792]: I0319 16:59:54.209894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ffpw\" (UniqueName: \"kubernetes.io/projected/ee465a76-03de-4983-9c1f-a064e12aed69-kube-api-access-2ffpw\") pod 
\"nmstate-operator-796d4cfff4-gzvjk\" (UID: \"ee465a76-03de-4983-9c1f-a064e12aed69\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" Mar 19 16:59:54 crc kubenswrapper[4792]: I0319 16:59:54.228602 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ffpw\" (UniqueName: \"kubernetes.io/projected/ee465a76-03de-4983-9c1f-a064e12aed69-kube-api-access-2ffpw\") pod \"nmstate-operator-796d4cfff4-gzvjk\" (UID: \"ee465a76-03de-4983-9c1f-a064e12aed69\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" Mar 19 16:59:54 crc kubenswrapper[4792]: I0319 16:59:54.282450 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" Mar 19 16:59:54 crc kubenswrapper[4792]: I0319 16:59:54.738464 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk"] Mar 19 16:59:55 crc kubenswrapper[4792]: I0319 16:59:55.383023 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" event={"ID":"ee465a76-03de-4983-9c1f-a064e12aed69","Type":"ContainerStarted","Data":"3471e7d8af19ed3ad27ecbeab382c6d6e05f83fcaeae9b3f101c4d69c5e7b928"} Mar 19 16:59:58 crc kubenswrapper[4792]: I0319 16:59:58.406645 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" event={"ID":"ee465a76-03de-4983-9c1f-a064e12aed69","Type":"ContainerStarted","Data":"9e248257d92aa75cf1220951bf09e1ecc25c8f4637680f859f2bc314635875fa"} Mar 19 16:59:58 crc kubenswrapper[4792]: I0319 16:59:58.430614 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gzvjk" podStartSLOduration=2.621016846 podStartE2EDuration="5.430591312s" podCreationTimestamp="2026-03-19 16:59:53 +0000 UTC" firstStartedPulling="2026-03-19 16:59:54.738559085 +0000 UTC m=+1157.884616625" 
lastFinishedPulling="2026-03-19 16:59:57.548133551 +0000 UTC m=+1160.694191091" observedRunningTime="2026-03-19 16:59:58.423606633 +0000 UTC m=+1161.569664173" watchObservedRunningTime="2026-03-19 16:59:58.430591312 +0000 UTC m=+1161.576648852" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.132399 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565660-q64d4"] Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.133892 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565660-q64d4" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.135548 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.136595 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.136862 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.137079 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"] Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.138260 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.139777 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.143831 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.144040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"] Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.161640 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565660-q64d4"] Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.320692 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.320757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.320816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gxt\" (UniqueName: 
\"kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt\") pod \"auto-csr-approver-29565660-q64d4\" (UID: \"86f927d1-51a8-41d6-a503-1967b4fd9561\") " pod="openshift-infra/auto-csr-approver-29565660-q64d4" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.321272 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv2gt\" (UniqueName: \"kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.423117 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv2gt\" (UniqueName: \"kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.423224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.423250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc 
kubenswrapper[4792]: I0319 17:00:00.423295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5gxt\" (UniqueName: \"kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt\") pod \"auto-csr-approver-29565660-q64d4\" (UID: \"86f927d1-51a8-41d6-a503-1967b4fd9561\") " pod="openshift-infra/auto-csr-approver-29565660-q64d4" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.424391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.432158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.449760 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv2gt\" (UniqueName: \"kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt\") pod \"collect-profiles-29565660-2rpqt\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.457229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5gxt\" (UniqueName: \"kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt\") pod \"auto-csr-approver-29565660-q64d4\" (UID: \"86f927d1-51a8-41d6-a503-1967b4fd9561\") " 
pod="openshift-infra/auto-csr-approver-29565660-q64d4" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.466245 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:00 crc kubenswrapper[4792]: I0319 17:00:00.754688 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565660-q64d4" Mar 19 17:00:01 crc kubenswrapper[4792]: I0319 17:00:01.756491 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"] Mar 19 17:00:01 crc kubenswrapper[4792]: I0319 17:00:01.914818 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565660-q64d4"] Mar 19 17:00:01 crc kubenswrapper[4792]: W0319 17:00:01.918991 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f927d1_51a8_41d6_a503_1967b4fd9561.slice/crio-32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca WatchSource:0}: Error finding container 32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca: Status 404 returned error can't find the container with id 32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca Mar 19 17:00:02 crc kubenswrapper[4792]: I0319 17:00:02.449950 4792 generic.go:334] "Generic (PLEG): container finished" podID="be29048b-fc27-449a-abb9-b15ef75ca132" containerID="2dd77ce8f6ee3728227df8d49b4267c7afa147e80400b20834e19c9269be6d44" exitCode=0 Mar 19 17:00:02 crc kubenswrapper[4792]: I0319 17:00:02.450040 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" event={"ID":"be29048b-fc27-449a-abb9-b15ef75ca132","Type":"ContainerDied","Data":"2dd77ce8f6ee3728227df8d49b4267c7afa147e80400b20834e19c9269be6d44"} Mar 19 17:00:02 crc 
kubenswrapper[4792]: I0319 17:00:02.450355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" event={"ID":"be29048b-fc27-449a-abb9-b15ef75ca132","Type":"ContainerStarted","Data":"4ff92b507e1e795fa91d24149f28d70b01a232b8545a3db9d9112878faee69d5"} Mar 19 17:00:02 crc kubenswrapper[4792]: I0319 17:00:02.451700 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565660-q64d4" event={"ID":"86f927d1-51a8-41d6-a503-1967b4fd9561","Type":"ContainerStarted","Data":"32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca"} Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.037281 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-954lx"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.038871 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.042824 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2w2wj" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.053318 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.054767 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.057168 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.077778 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-954lx"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.098901 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.127663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mmsmp"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.128853 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.173136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bk4d\" (UniqueName: \"kubernetes.io/projected/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-kube-api-access-6bk4d\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.173247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6qd\" (UniqueName: \"kubernetes.io/projected/4ab3aea4-d9a5-42e1-9c73-435f5c722cbb-kube-api-access-mw6qd\") pod \"nmstate-metrics-9b8c8685d-954lx\" (UID: \"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.173324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.249827 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.251103 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.261450 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lzbc5" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.261748 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.269174 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.276657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-ovs-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.276725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bk4d\" (UniqueName: \"kubernetes.io/projected/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-kube-api-access-6bk4d\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc 
kubenswrapper[4792]: I0319 17:00:03.276833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-dbus-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.276883 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-nmstate-lock\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.279136 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.284444 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6qd\" (UniqueName: \"kubernetes.io/projected/4ab3aea4-d9a5-42e1-9c73-435f5c722cbb-kube-api-access-mw6qd\") pod \"nmstate-metrics-9b8c8685d-954lx\" (UID: \"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.284988 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88k2\" (UniqueName: \"kubernetes.io/projected/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-kube-api-access-k88k2\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.285024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: E0319 17:00:03.285208 4792 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 19 17:00:03 crc kubenswrapper[4792]: E0319 17:00:03.286543 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair podName:9d86fdf3-73d9-48f7-b44f-6182252fc4f8 nodeName:}" failed. No retries permitted until 2026-03-19 17:00:03.785234029 +0000 UTC m=+1166.931291569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair") pod "nmstate-webhook-5f558f5558-sjth6" (UID: "9d86fdf3-73d9-48f7-b44f-6182252fc4f8") : secret "openshift-nmstate-webhook" not found Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.299779 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bk4d\" (UniqueName: \"kubernetes.io/projected/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-kube-api-access-6bk4d\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.313564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6qd\" (UniqueName: \"kubernetes.io/projected/4ab3aea4-d9a5-42e1-9c73-435f5c722cbb-kube-api-access-mw6qd\") pod \"nmstate-metrics-9b8c8685d-954lx\" (UID: \"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.369022 4792 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386560 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-dbus-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386592 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-nmstate-lock\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386702 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386725 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wcwd\" (UniqueName: \"kubernetes.io/projected/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-kube-api-access-5wcwd\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88k2\" (UniqueName: \"kubernetes.io/projected/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-kube-api-access-k88k2\") pod 
\"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-ovs-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386817 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-nmstate-lock\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386927 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-dbus-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.386961 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-ovs-socket\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") 
" pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.424023 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88k2\" (UniqueName: \"kubernetes.io/projected/ae053ba9-b3d6-427d-b0e4-88e11ef2ba71-kube-api-access-k88k2\") pod \"nmstate-handler-mmsmp\" (UID: \"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71\") " pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.474640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mmsmp" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.489740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.490027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wcwd\" (UniqueName: \"kubernetes.io/projected/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-kube-api-access-5wcwd\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.490163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: E0319 17:00:03.490420 4792 secret.go:188] Couldn't get secret 
openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 19 17:00:03 crc kubenswrapper[4792]: E0319 17:00:03.490534 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert podName:c138fbf0-cc91-4f17-913e-87f6f4fcbbe8 nodeName:}" failed. No retries permitted until 2026-03-19 17:00:03.990519527 +0000 UTC m=+1167.136577067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-b7gwq" (UID: "c138fbf0-cc91-4f17-913e-87f6f4fcbbe8") : secret "plugin-serving-cert" not found Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.491707 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.546557 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wcwd\" (UniqueName: \"kubernetes.io/projected/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-kube-api-access-5wcwd\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.582533 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.583823 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.611751 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"] Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.695529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.695747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9hb2\" (UniqueName: \"kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.695894 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.695990 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.696077 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.696150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.696264 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.798255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.799533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.799643 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9hb2\" (UniqueName: \"kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.799729 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.799810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.801744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.801885 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.801981 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.799460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.800577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.800857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.803421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.806300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.814355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d86fdf3-73d9-48f7-b44f-6182252fc4f8-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-sjth6\" (UID: \"9d86fdf3-73d9-48f7-b44f-6182252fc4f8\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.822716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.822932 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9hb2\" (UniqueName: \"kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2\") pod \"console-5dc4d84b9d-c6zml\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.886266 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.933130 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:00:03 crc kubenswrapper[4792]: I0319 17:00:03.985272 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.006109 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume\") pod \"be29048b-fc27-449a-abb9-b15ef75ca132\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.006156 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv2gt\" (UniqueName: \"kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt\") pod \"be29048b-fc27-449a-abb9-b15ef75ca132\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.006319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume\") pod \"be29048b-fc27-449a-abb9-b15ef75ca132\" (UID: \"be29048b-fc27-449a-abb9-b15ef75ca132\") " Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.006571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.007587 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume" (OuterVolumeSpecName: "config-volume") pod "be29048b-fc27-449a-abb9-b15ef75ca132" (UID: "be29048b-fc27-449a-abb9-b15ef75ca132"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.010455 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt" (OuterVolumeSpecName: "kube-api-access-wv2gt") pod "be29048b-fc27-449a-abb9-b15ef75ca132" (UID: "be29048b-fc27-449a-abb9-b15ef75ca132"). InnerVolumeSpecName "kube-api-access-wv2gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.011109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c138fbf0-cc91-4f17-913e-87f6f4fcbbe8-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-b7gwq\" (UID: \"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq"
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.012975 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be29048b-fc27-449a-abb9-b15ef75ca132" (UID: "be29048b-fc27-449a-abb9-b15ef75ca132"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.070048 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-954lx"]
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.108272 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be29048b-fc27-449a-abb9-b15ef75ca132-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.108306 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv2gt\" (UniqueName: \"kubernetes.io/projected/be29048b-fc27-449a-abb9-b15ef75ca132-kube-api-access-wv2gt\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.108316 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be29048b-fc27-449a-abb9-b15ef75ca132-config-volume\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.176687 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"]
Mar 19 17:00:04 crc kubenswrapper[4792]: W0319 17:00:04.179405 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d88883_fb51_4f6b_9d19_f7e312f3a9af.slice/crio-db84af2788cd3b370b8aca56ccc401f44e8286d693617e12a5eb8a3fe57f0319 WatchSource:0}: Error finding container db84af2788cd3b370b8aca56ccc401f44e8286d693617e12a5eb8a3fe57f0319: Status 404 returned error can't find the container with id db84af2788cd3b370b8aca56ccc401f44e8286d693617e12a5eb8a3fe57f0319
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.266660 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq"
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.446487 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"]
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.483520 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dc4d84b9d-c6zml" event={"ID":"36d88883-fb51-4f6b-9d19-f7e312f3a9af","Type":"ContainerStarted","Data":"f11d1ffdadbf8cd04905518f77efc8605e1ff57362e3bd7de1516e3beea87adf"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.483564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dc4d84b9d-c6zml" event={"ID":"36d88883-fb51-4f6b-9d19-f7e312f3a9af","Type":"ContainerStarted","Data":"db84af2788cd3b370b8aca56ccc401f44e8286d693617e12a5eb8a3fe57f0319"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.484615 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" event={"ID":"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb","Type":"ContainerStarted","Data":"2c093710bc5dc29722b1fa843429fef0b8eedba9391d09d515155804b140b8d7"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.486260 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.488953 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt" event={"ID":"be29048b-fc27-449a-abb9-b15ef75ca132","Type":"ContainerDied","Data":"4ff92b507e1e795fa91d24149f28d70b01a232b8545a3db9d9112878faee69d5"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.488987 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff92b507e1e795fa91d24149f28d70b01a232b8545a3db9d9112878faee69d5"
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.496377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" event={"ID":"9d86fdf3-73d9-48f7-b44f-6182252fc4f8","Type":"ContainerStarted","Data":"67fa662a5ae38fda7fe7ad79b9e78a72a6295e2dbb694e24a0cd8bfcbbbc2ec7"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.500199 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mmsmp" event={"ID":"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71","Type":"ContainerStarted","Data":"85f07256aa24a000e16456094e2da81478dc9c225d8916a9a0fc0c6b31e05eee"}
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.507626 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dc4d84b9d-c6zml" podStartSLOduration=1.50760443 podStartE2EDuration="1.50760443s" podCreationTimestamp="2026-03-19 17:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:00:04.501112074 +0000 UTC m=+1167.647169624" watchObservedRunningTime="2026-03-19 17:00:04.50760443 +0000 UTC m=+1167.653661960"
Mar 19 17:00:04 crc kubenswrapper[4792]: I0319 17:00:04.666414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq"]
Mar 19 17:00:05 crc kubenswrapper[4792]: I0319 17:00:05.514622 4792 generic.go:334] "Generic (PLEG): container finished" podID="86f927d1-51a8-41d6-a503-1967b4fd9561" containerID="aa8700972a4cfdcc197ed3f3051a23c9b3d30c93ae41668692c66ec3a83b6958" exitCode=0
Mar 19 17:00:05 crc kubenswrapper[4792]: I0319 17:00:05.514680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565660-q64d4" event={"ID":"86f927d1-51a8-41d6-a503-1967b4fd9561","Type":"ContainerDied","Data":"aa8700972a4cfdcc197ed3f3051a23c9b3d30c93ae41668692c66ec3a83b6958"}
Mar 19 17:00:05 crc kubenswrapper[4792]: I0319 17:00:05.517127 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" event={"ID":"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8","Type":"ContainerStarted","Data":"996cb42a59c9299c21e10ec36987f65a01efd8b516c4eb84003193c3cc5532d0"}
Mar 19 17:00:06 crc kubenswrapper[4792]: I0319 17:00:06.864799 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565660-q64d4"
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.060123 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5gxt\" (UniqueName: \"kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt\") pod \"86f927d1-51a8-41d6-a503-1967b4fd9561\" (UID: \"86f927d1-51a8-41d6-a503-1967b4fd9561\") "
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.080108 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt" (OuterVolumeSpecName: "kube-api-access-z5gxt") pod "86f927d1-51a8-41d6-a503-1967b4fd9561" (UID: "86f927d1-51a8-41d6-a503-1967b4fd9561"). InnerVolumeSpecName "kube-api-access-z5gxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.163373 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5gxt\" (UniqueName: \"kubernetes.io/projected/86f927d1-51a8-41d6-a503-1967b4fd9561-kube-api-access-z5gxt\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.552468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565660-q64d4" event={"ID":"86f927d1-51a8-41d6-a503-1967b4fd9561","Type":"ContainerDied","Data":"32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca"}
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.552511 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565660-q64d4"
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.552520 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32854c6bdada7a0b80beacf875069eb09265c13b4f68b4427118298ea60bfeca"
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.916183 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565654-769pd"]
Mar 19 17:00:07 crc kubenswrapper[4792]: I0319 17:00:07.923536 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565654-769pd"]
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.574963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mmsmp" event={"ID":"ae053ba9-b3d6-427d-b0e4-88e11ef2ba71","Type":"ContainerStarted","Data":"a55b97fac80a887c8c4fc528e38da9fb798e6c9b3bc11442dc8dde3f27186a2b"}
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.575742 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mmsmp"
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.578797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" event={"ID":"c138fbf0-cc91-4f17-913e-87f6f4fcbbe8","Type":"ContainerStarted","Data":"705cdaba1ab86d86bfed7f64972731c61d9ae2d924cfe34c8767ed62b1e532e2"}
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.581477 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" event={"ID":"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb","Type":"ContainerStarted","Data":"fadb46e7467cc01437cf431d85f717c2af1487083f417d8df3257456f65b34a6"}
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.583626 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" event={"ID":"9d86fdf3-73d9-48f7-b44f-6182252fc4f8","Type":"ContainerStarted","Data":"5376fea894e7f9303d981b114f9d27a55c594205cf21d8972cdc736d5d5e29fb"}
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.584447 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.599940 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mmsmp" podStartSLOduration=1.794008437 podStartE2EDuration="6.599893652s" podCreationTimestamp="2026-03-19 17:00:03 +0000 UTC" firstStartedPulling="2026-03-19 17:00:03.526707568 +0000 UTC m=+1166.672765108" lastFinishedPulling="2026-03-19 17:00:08.332592783 +0000 UTC m=+1171.478650323" observedRunningTime="2026-03-19 17:00:09.593039566 +0000 UTC m=+1172.739097106" watchObservedRunningTime="2026-03-19 17:00:09.599893652 +0000 UTC m=+1172.745951212"
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.610545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-b7gwq" podStartSLOduration=2.967575005 podStartE2EDuration="6.610523131s" podCreationTimestamp="2026-03-19 17:00:03 +0000 UTC" firstStartedPulling="2026-03-19 17:00:04.679866742 +0000 UTC m=+1167.825924282" lastFinishedPulling="2026-03-19 17:00:08.322814868 +0000 UTC m=+1171.468872408" observedRunningTime="2026-03-19 17:00:09.608875665 +0000 UTC m=+1172.754933215" watchObservedRunningTime="2026-03-19 17:00:09.610523131 +0000 UTC m=+1172.756580661"
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.627433 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podStartSLOduration=2.7623299980000002 podStartE2EDuration="6.627417789s" podCreationTimestamp="2026-03-19 17:00:03 +0000 UTC" firstStartedPulling="2026-03-19 17:00:04.458617911 +0000 UTC m=+1167.604675451" lastFinishedPulling="2026-03-19 17:00:08.323705712 +0000 UTC m=+1171.469763242" observedRunningTime="2026-03-19 17:00:09.625527917 +0000 UTC m=+1172.771585457" watchObservedRunningTime="2026-03-19 17:00:09.627417789 +0000 UTC m=+1172.773475329"
Mar 19 17:00:09 crc kubenswrapper[4792]: I0319 17:00:09.748685 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15038476-e48d-4bb9-b67f-928eb93e7c18" path="/var/lib/kubelet/pods/15038476-e48d-4bb9-b67f-928eb93e7c18/volumes"
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.509086 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mmsmp"
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.624553 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" event={"ID":"4ab3aea4-d9a5-42e1-9c73-435f5c722cbb","Type":"ContainerStarted","Data":"6614da177581f4798856ea904e7b736355e7ab3d4e6700dfcbfeb8f365a63407"}
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.643859 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-954lx" podStartSLOduration=1.7155259790000001 podStartE2EDuration="10.643819542s" podCreationTimestamp="2026-03-19 17:00:03 +0000 UTC" firstStartedPulling="2026-03-19 17:00:04.092432411 +0000 UTC m=+1167.238489951" lastFinishedPulling="2026-03-19 17:00:13.020725974 +0000 UTC m=+1176.166783514" observedRunningTime="2026-03-19 17:00:13.638398045 +0000 UTC m=+1176.784455595" watchObservedRunningTime="2026-03-19 17:00:13.643819542 +0000 UTC m=+1176.789877102"
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.934586 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dc4d84b9d-c6zml"
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.934640 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dc4d84b9d-c6zml"
Mar 19 17:00:13 crc kubenswrapper[4792]: I0319 17:00:13.938996 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dc4d84b9d-c6zml"
Mar 19 17:00:14 crc kubenswrapper[4792]: I0319 17:00:14.633130 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dc4d84b9d-c6zml"
Mar 19 17:00:14 crc kubenswrapper[4792]: I0319 17:00:14.688200 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"]
Mar 19 17:00:20 crc kubenswrapper[4792]: I0319 17:00:20.230425 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 17:00:20 crc kubenswrapper[4792]: I0319 17:00:20.231136 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 19 17:00:23 crc kubenswrapper[4792]: I0319 17:00:23.993540 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"
Mar 19 17:00:39 crc kubenswrapper[4792]: I0319 17:00:39.740246 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7c589d8dc4-d7wtg" podUID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" containerName="console" containerID="cri-o://77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b" gracePeriod=15
Mar 19 17:00:39 crc kubenswrapper[4792]: I0319 17:00:39.947545 4792 scope.go:117] "RemoveContainer" containerID="80b82ade50533f6bd36db175c7aa9476ab9558021e27cf69c36e04034331fb71"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.164362 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c589d8dc4-d7wtg_ca3b391a-e070-44ac-a8ba-ff5e89ed4c65/console/0.log"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.164439 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c589d8dc4-d7wtg"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.216606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.216689 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5x2\" (UniqueName: \"kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.216730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.216758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.216780 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.218062 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.221794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca" (OuterVolumeSpecName: "service-ca") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.223020 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.223071 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.232992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2" (OuterVolumeSpecName: "kube-api-access-dk5x2") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "kube-api-access-dk5x2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317608 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317644 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config\") pod \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\" (UID: \"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65\") "
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317911 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317921 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-service-ca\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317930 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317940 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.317948 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5x2\" (UniqueName: \"kubernetes.io/projected/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-kube-api-access-dk5x2\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.318136 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.318168 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config" (OuterVolumeSpecName: "console-config") pod "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" (UID: "ca3b391a-e070-44ac-a8ba-ff5e89ed4c65"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.419322 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.419664 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65-console-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831360 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c589d8dc4-d7wtg_ca3b391a-e070-44ac-a8ba-ff5e89ed4c65/console/0.log"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831408 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" containerID="77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b" exitCode=2
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831442 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c589d8dc4-d7wtg" event={"ID":"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65","Type":"ContainerDied","Data":"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"}
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831472 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c589d8dc4-d7wtg" event={"ID":"ca3b391a-e070-44ac-a8ba-ff5e89ed4c65","Type":"ContainerDied","Data":"76e8779f8e5577118fdc0d90a8a5652be07103c5a6079089d7406303db9d715c"}
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831489 4792 scope.go:117] "RemoveContainer" containerID="77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.831491 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c589d8dc4-d7wtg"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.860678 4792 scope.go:117] "RemoveContainer" containerID="77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"
Mar 19 17:00:40 crc kubenswrapper[4792]: E0319 17:00:40.861221 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b\": container with ID starting with 77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b not found: ID does not exist" containerID="77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.861266 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b"} err="failed to get container status \"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b\": rpc error: code = NotFound desc = could not find container \"77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b\": container with ID starting with 77cb16e4650d1451c2ba6758d0e4f0780641a3a2174853dc69d9e6c22b8aea9b not found: ID does not exist"
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.862760 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"]
Mar 19 17:00:40 crc kubenswrapper[4792]: I0319 17:00:40.869326 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c589d8dc4-d7wtg"]
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.010955 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"]
Mar 19 17:00:41 crc kubenswrapper[4792]: E0319 17:00:41.011219 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f927d1-51a8-41d6-a503-1967b4fd9561" containerName="oc"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011234 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f927d1-51a8-41d6-a503-1967b4fd9561" containerName="oc"
Mar 19 17:00:41 crc kubenswrapper[4792]: E0319 17:00:41.011254 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" containerName="console"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011261 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" containerName="console"
Mar 19 17:00:41 crc kubenswrapper[4792]: E0319 17:00:41.011284 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be29048b-fc27-449a-abb9-b15ef75ca132" containerName="collect-profiles"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011290 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="be29048b-fc27-449a-abb9-b15ef75ca132" containerName="collect-profiles"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011415 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="be29048b-fc27-449a-abb9-b15ef75ca132" containerName="collect-profiles"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011435 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f927d1-51a8-41d6-a503-1967b4fd9561" containerName="oc"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.011443 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" containerName="console"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.012374 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.020078 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.027919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.028019 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.028092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhbv\" (UniqueName: \"kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.032272 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"]
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.129152 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.129241 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.129303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhbv\" (UniqueName: \"kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.129942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.130040 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.147057 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhbv\" (UniqueName: \"kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.332246 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.737071 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"]
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.750671 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3b391a-e070-44ac-a8ba-ff5e89ed4c65" path="/var/lib/kubelet/pods/ca3b391a-e070-44ac-a8ba-ff5e89ed4c65/volumes"
Mar 19 17:00:41 crc kubenswrapper[4792]: I0319 17:00:41.840911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" event={"ID":"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b","Type":"ContainerStarted","Data":"5aea671ecdae95736e639a17d35d4e7429d22bb921b945f4130368855c0903fd"}
Mar 19 17:00:42 crc kubenswrapper[4792]: I0319 17:00:42.852057 4792 generic.go:334] "Generic (PLEG): container finished" podID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerID="33c67a16b33ff7e9a6699f60f4103c82f5d84132dc05e15d19df9ad12fc08ee7" exitCode=0
Mar 19 17:00:42 crc kubenswrapper[4792]: I0319 17:00:42.852316 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" event={"ID":"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b","Type":"ContainerDied","Data":"33c67a16b33ff7e9a6699f60f4103c82f5d84132dc05e15d19df9ad12fc08ee7"}
Mar 19 17:00:42 crc kubenswrapper[4792]: I0319 17:00:42.854351 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 19 17:00:45 crc kubenswrapper[4792]: I0319 17:00:45.880014 4792 generic.go:334] "Generic (PLEG): container finished" podID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerID="88864a5df0cf3f40f900eb43e99a0954e2efa6463ab425c787062bc5ec8e0080" exitCode=0
Mar 19 17:00:45 crc kubenswrapper[4792]: I0319 17:00:45.880104 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" event={"ID":"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b","Type":"ContainerDied","Data":"88864a5df0cf3f40f900eb43e99a0954e2efa6463ab425c787062bc5ec8e0080"}
Mar 19 17:00:46 crc kubenswrapper[4792]: I0319 17:00:46.894612 4792 generic.go:334] "Generic (PLEG): container finished" podID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerID="f44e606729df5fe25c89e208c3eab09100f5ff2ccf093f8baf488179e4b527bf" exitCode=0
Mar 19 17:00:46 crc kubenswrapper[4792]: I0319 17:00:46.894675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" event={"ID":"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b","Type":"ContainerDied","Data":"f44e606729df5fe25c89e208c3eab09100f5ff2ccf093f8baf488179e4b527bf"}
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.207009 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w"
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.339291 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util\") pod \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") "
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.339714 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhbv\" (UniqueName: \"kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv\") pod \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") "
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.339813 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle\") pod \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\" (UID: \"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b\") "
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.341556 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle" (OuterVolumeSpecName: "bundle") pod "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" (UID: "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.346109 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv" (OuterVolumeSpecName: "kube-api-access-hwhbv") pod "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" (UID: "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b"). InnerVolumeSpecName "kube-api-access-hwhbv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.350517 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util" (OuterVolumeSpecName: "util") pod "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" (UID: "f7e02e6a-ee2d-4d53-a972-ddfaf33a218b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.441220 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhbv\" (UniqueName: \"kubernetes.io/projected/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-kube-api-access-hwhbv\") on node \"crc\" DevicePath \"\"" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.441333 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.441347 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7e02e6a-ee2d-4d53-a972-ddfaf33a218b-util\") on node \"crc\" DevicePath \"\"" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.911779 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" event={"ID":"f7e02e6a-ee2d-4d53-a972-ddfaf33a218b","Type":"ContainerDied","Data":"5aea671ecdae95736e639a17d35d4e7429d22bb921b945f4130368855c0903fd"} Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.911824 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aea671ecdae95736e639a17d35d4e7429d22bb921b945f4130368855c0903fd" Mar 19 17:00:48 crc kubenswrapper[4792]: I0319 17:00:48.911925 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w" Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.230737 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.230827 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.230952 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.231892 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.231984 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27" gracePeriod=600 Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.928649 4792 generic.go:334] "Generic (PLEG): 
container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27" exitCode=0 Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.928694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27"} Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.929276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3"} Mar 19 17:00:50 crc kubenswrapper[4792]: I0319 17:00:50.929302 4792 scope.go:117] "RemoveContainer" containerID="cccedd4b3574c81c38a56f329e598dc97a6d03867a548dcb7438ac401ae1edcb" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.853639 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7"] Mar 19 17:01:01 crc kubenswrapper[4792]: E0319 17:01:01.854577 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="extract" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.854595 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="extract" Mar 19 17:01:01 crc kubenswrapper[4792]: E0319 17:01:01.854610 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="util" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.854619 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="util" Mar 19 17:01:01 crc kubenswrapper[4792]: E0319 
17:01:01.854636 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="pull" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.854644 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="pull" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.854864 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e02e6a-ee2d-4d53-a972-ddfaf33a218b" containerName="extract" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.856227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.861784 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.861952 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.865163 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.867151 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.867207 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fjlzp" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.868436 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7"] Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.952177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-apiservice-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.952244 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jc8c\" (UniqueName: \"kubernetes.io/projected/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-kube-api-access-9jc8c\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:01 crc kubenswrapper[4792]: I0319 17:01:01.952315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-webhook-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.053758 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-apiservice-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.054061 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jc8c\" (UniqueName: \"kubernetes.io/projected/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-kube-api-access-9jc8c\") pod 
\"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.054180 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-webhook-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.059726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-apiservice-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.065547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-webhook-cert\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.078210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jc8c\" (UniqueName: \"kubernetes.io/projected/e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76-kube-api-access-9jc8c\") pod \"metallb-operator-controller-manager-6c96bc4ccc-fw8z7\" (UID: \"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76\") " pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.104064 4792 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w"] Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.106043 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.108769 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rvlsk" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.109066 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.109887 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.124693 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w"] Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.177262 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.256902 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-webhook-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.256965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsk72\" (UniqueName: \"kubernetes.io/projected/4b613458-1b90-42f8-8d32-d3017f189770-kube-api-access-bsk72\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.257039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-apiservice-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.358296 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-apiservice-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.358382 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-webhook-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.358416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsk72\" (UniqueName: \"kubernetes.io/projected/4b613458-1b90-42f8-8d32-d3017f189770-kube-api-access-bsk72\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.363510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-apiservice-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.366604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b613458-1b90-42f8-8d32-d3017f189770-webhook-cert\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.381533 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsk72\" (UniqueName: \"kubernetes.io/projected/4b613458-1b90-42f8-8d32-d3017f189770-kube-api-access-bsk72\") pod \"metallb-operator-webhook-server-8559bd9b58-4dc8w\" (UID: \"4b613458-1b90-42f8-8d32-d3017f189770\") " 
pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.453454 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.846819 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7"] Mar 19 17:01:02 crc kubenswrapper[4792]: I0319 17:01:02.923251 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w"] Mar 19 17:01:02 crc kubenswrapper[4792]: W0319 17:01:02.925947 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b613458_1b90_42f8_8d32_d3017f189770.slice/crio-2c627f5c94a11011852bfdc636ad7a1243d6903a134844b575794ebc54469314 WatchSource:0}: Error finding container 2c627f5c94a11011852bfdc636ad7a1243d6903a134844b575794ebc54469314: Status 404 returned error can't find the container with id 2c627f5c94a11011852bfdc636ad7a1243d6903a134844b575794ebc54469314 Mar 19 17:01:03 crc kubenswrapper[4792]: I0319 17:01:03.021556 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" event={"ID":"4b613458-1b90-42f8-8d32-d3017f189770","Type":"ContainerStarted","Data":"2c627f5c94a11011852bfdc636ad7a1243d6903a134844b575794ebc54469314"} Mar 19 17:01:03 crc kubenswrapper[4792]: I0319 17:01:03.022535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" event={"ID":"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76","Type":"ContainerStarted","Data":"8fd666f36e72ede93c9c8f55ae9265bda1efece123feda486ef1711401eee370"} Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.054733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" event={"ID":"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76","Type":"ContainerStarted","Data":"a8cc45d614d01b79bc8226ff741d7f200f7dbd01e7ef867245c9c075c8aff53d"} Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.056140 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.057673 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" event={"ID":"4b613458-1b90-42f8-8d32-d3017f189770","Type":"ContainerStarted","Data":"bc0eb405a9ef9a4e9d1c483fe644ed3cf4ae09c982591495e93b393fd714dc73"} Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.058074 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.090598 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podStartSLOduration=2.179758507 podStartE2EDuration="7.090576025s" podCreationTimestamp="2026-03-19 17:01:01 +0000 UTC" firstStartedPulling="2026-03-19 17:01:02.859143316 +0000 UTC m=+1226.005200856" lastFinishedPulling="2026-03-19 17:01:07.769960834 +0000 UTC m=+1230.916018374" observedRunningTime="2026-03-19 17:01:08.087714396 +0000 UTC m=+1231.233771946" watchObservedRunningTime="2026-03-19 17:01:08.090576025 +0000 UTC m=+1231.236633565" Mar 19 17:01:08 crc kubenswrapper[4792]: I0319 17:01:08.105701 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podStartSLOduration=1.253697635 podStartE2EDuration="6.105679469s" podCreationTimestamp="2026-03-19 17:01:02 +0000 UTC" firstStartedPulling="2026-03-19 
17:01:02.929041593 +0000 UTC m=+1226.075099133" lastFinishedPulling="2026-03-19 17:01:07.781023427 +0000 UTC m=+1230.927080967" observedRunningTime="2026-03-19 17:01:08.102218214 +0000 UTC m=+1231.248275784" watchObservedRunningTime="2026-03-19 17:01:08.105679469 +0000 UTC m=+1231.251737009" Mar 19 17:01:22 crc kubenswrapper[4792]: I0319 17:01:22.460102 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.179926 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.819669 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nd7zd"] Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.822717 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.824397 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.824403 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.831415 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zzpll" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.860388 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"] Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.861952 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.862807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-conf\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.862934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30ef8aea-daf2-4351-bf36-a8238738129a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.862961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wbx\" (UniqueName: \"kubernetes.io/projected/81f1b6c9-e921-49a2-8149-767fe360d7d0-kube-api-access-p8wbx\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863002 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-startup\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " 
pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-sockets\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863335 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-reloader\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863454 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.863664 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjt7\" (UniqueName: \"kubernetes.io/projected/30ef8aea-daf2-4351-bf36-a8238738129a-kube-api-access-kdjt7\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.866344 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.880898 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"] Mar 19 17:01:42 crc 
kubenswrapper[4792]: I0319 17:01:42.942612 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6cld2"] Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.944068 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6cld2" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.946249 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.946458 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w9rsq" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.948088 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.948255 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.960179 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-gdvnw"] Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.961362 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.963277 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964602 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdkx\" (UniqueName: \"kubernetes.io/projected/ee375e3b-1376-4cd4-93b7-da4316b203a7-kube-api-access-2gdkx\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-cert\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964692 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-conf\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964729 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30ef8aea-daf2-4351-bf36-a8238738129a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8wbx\" (UniqueName: \"kubernetes.io/projected/81f1b6c9-e921-49a2-8149-767fe360d7d0-kube-api-access-p8wbx\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964802 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-startup\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964871 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee375e3b-1376-4cd4-93b7-da4316b203a7-metallb-excludel2\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.964897 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-sockets\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " 
pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.965113 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plg95\" (UniqueName: \"kubernetes.io/projected/69f67eea-c8b3-40a4-891a-4c15c31cb410-kube-api-access-plg95\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.965141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-reloader\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.965182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.965226 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.965263 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:42 crc kubenswrapper[4792]: 
I0319 17:01:42.965307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjt7\" (UniqueName: \"kubernetes.io/projected/30ef8aea-daf2-4351-bf36-a8238738129a-kube-api-access-kdjt7\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.966077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-conf\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.967309 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-sockets\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.968262 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.968559 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81f1b6c9-e921-49a2-8149-767fe360d7d0-reloader\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: E0319 17:01:42.968671 4792 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 19 17:01:42 crc kubenswrapper[4792]: E0319 
17:01:42.968731 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs podName:81f1b6c9-e921-49a2-8149-767fe360d7d0 nodeName:}" failed. No retries permitted until 2026-03-19 17:01:43.46871162 +0000 UTC m=+1266.614769250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs") pod "frr-k8s-nd7zd" (UID: "81f1b6c9-e921-49a2-8149-767fe360d7d0") : secret "frr-k8s-certs-secret" not found Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.969523 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-gdvnw"] Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.972362 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30ef8aea-daf2-4351-bf36-a8238738129a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.974069 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81f1b6c9-e921-49a2-8149-767fe360d7d0-frr-startup\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:42 crc kubenswrapper[4792]: I0319 17:01:42.990700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjt7\" (UniqueName: \"kubernetes.io/projected/30ef8aea-daf2-4351-bf36-a8238738129a-kube-api-access-kdjt7\") pod \"frr-k8s-webhook-server-bcc4b6f68-kzh4h\" (UID: \"30ef8aea-daf2-4351-bf36-a8238738129a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.014527 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8wbx\" (UniqueName: \"kubernetes.io/projected/81f1b6c9-e921-49a2-8149-767fe360d7d0-kube-api-access-p8wbx\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.084446 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.084882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdkx\" (UniqueName: \"kubernetes.io/projected/ee375e3b-1376-4cd4-93b7-da4316b203a7-kube-api-access-2gdkx\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.084927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-cert\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.085011 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.085252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/ee375e3b-1376-4cd4-93b7-da4316b203a7-metallb-excludel2\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.085327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plg95\" (UniqueName: \"kubernetes.io/projected/69f67eea-c8b3-40a4-891a-4c15c31cb410-kube-api-access-plg95\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.085405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.085656 4792 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.085754 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs podName:69f67eea-c8b3-40a4-891a-4c15c31cb410 nodeName:}" failed. No retries permitted until 2026-03-19 17:01:43.585732448 +0000 UTC m=+1266.731789988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs") pod "controller-7bb4cc7c98-gdvnw" (UID: "69f67eea-c8b3-40a4-891a-4c15c31cb410") : secret "controller-certs-secret" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.086113 4792 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.086158 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist podName:ee375e3b-1376-4cd4-93b7-da4316b203a7 nodeName:}" failed. No retries permitted until 2026-03-19 17:01:43.586143869 +0000 UTC m=+1266.732201409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist") pod "speaker-6cld2" (UID: "ee375e3b-1376-4cd4-93b7-da4316b203a7") : secret "metallb-memberlist" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.086697 4792 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.086800 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs podName:ee375e3b-1376-4cd4-93b7-da4316b203a7 nodeName:}" failed. No retries permitted until 2026-03-19 17:01:43.586777376 +0000 UTC m=+1266.732834916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs") pod "speaker-6cld2" (UID: "ee375e3b-1376-4cd4-93b7-da4316b203a7") : secret "speaker-certs-secret" not found Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.087982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee375e3b-1376-4cd4-93b7-da4316b203a7-metallb-excludel2\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.091975 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.115461 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-cert\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.115603 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdkx\" (UniqueName: \"kubernetes.io/projected/ee375e3b-1376-4cd4-93b7-da4316b203a7-kube-api-access-2gdkx\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.119460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plg95\" (UniqueName: \"kubernetes.io/projected/69f67eea-c8b3-40a4-891a-4c15c31cb410-kube-api-access-plg95\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.188342 4792 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.492740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.498203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f1b6c9-e921-49a2-8149-767fe360d7d0-metrics-certs\") pod \"frr-k8s-nd7zd\" (UID: \"81f1b6c9-e921-49a2-8149-767fe360d7d0\") " pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.594976 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.595051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.595102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.595155 4792 
secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 19 17:01:43 crc kubenswrapper[4792]: E0319 17:01:43.595226 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist podName:ee375e3b-1376-4cd4-93b7-da4316b203a7 nodeName:}" failed. No retries permitted until 2026-03-19 17:01:44.595206601 +0000 UTC m=+1267.741264141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist") pod "speaker-6cld2" (UID: "ee375e3b-1376-4cd4-93b7-da4316b203a7") : secret "metallb-memberlist" not found Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.599956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-metrics-certs\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.600479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f67eea-c8b3-40a4-891a-4c15c31cb410-metrics-certs\") pod \"controller-7bb4cc7c98-gdvnw\" (UID: \"69f67eea-c8b3-40a4-891a-4c15c31cb410\") " pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.608245 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"] Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.688149 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.734702 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" event={"ID":"30ef8aea-daf2-4351-bf36-a8238738129a","Type":"ContainerStarted","Data":"a31de106fe2ed78b35533928a9ca6448b9e6112f7be37ca56883bc32339aba18"} Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.744301 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nd7zd" Mar 19 17:01:43 crc kubenswrapper[4792]: I0319 17:01:43.948598 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-gdvnw"] Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.612081 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.622577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee375e3b-1376-4cd4-93b7-da4316b203a7-memberlist\") pod \"speaker-6cld2\" (UID: \"ee375e3b-1376-4cd4-93b7-da4316b203a7\") " pod="metallb-system/speaker-6cld2" Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.743885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"f2b8a91edb5f63d595be65c5fc9e7936753429c5c80146b1e83565a8070edd4e"} Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.745816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gdvnw" 
event={"ID":"69f67eea-c8b3-40a4-891a-4c15c31cb410","Type":"ContainerStarted","Data":"b64273c89aa06d9cdc86abd48f4271192144bc35e29bb2c1d3d28b459bb35aaa"} Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.745878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gdvnw" event={"ID":"69f67eea-c8b3-40a4-891a-4c15c31cb410","Type":"ContainerStarted","Data":"9084d0decc4897eebd211030b6964c9ee4729368295671d7b1e7852dfcb5911c"} Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.745892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gdvnw" event={"ID":"69f67eea-c8b3-40a4-891a-4c15c31cb410","Type":"ContainerStarted","Data":"5347f87707e3fa6d9843bd8a89606c457a3f3586e1c50a07880077cde33a6aec"} Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.745974 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.766004 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podStartSLOduration=2.765986112 podStartE2EDuration="2.765986112s" podCreationTimestamp="2026-03-19 17:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:01:44.758952541 +0000 UTC m=+1267.905010121" watchObservedRunningTime="2026-03-19 17:01:44.765986112 +0000 UTC m=+1267.912043652" Mar 19 17:01:44 crc kubenswrapper[4792]: I0319 17:01:44.787447 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6cld2" Mar 19 17:01:44 crc kubenswrapper[4792]: W0319 17:01:44.807990 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee375e3b_1376_4cd4_93b7_da4316b203a7.slice/crio-f023b519db530aba7ff1f2cc41c655b5076493d4958cc604d88f382d8ee201d8 WatchSource:0}: Error finding container f023b519db530aba7ff1f2cc41c655b5076493d4958cc604d88f382d8ee201d8: Status 404 returned error can't find the container with id f023b519db530aba7ff1f2cc41c655b5076493d4958cc604d88f382d8ee201d8 Mar 19 17:01:45 crc kubenswrapper[4792]: I0319 17:01:45.760910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6cld2" event={"ID":"ee375e3b-1376-4cd4-93b7-da4316b203a7","Type":"ContainerStarted","Data":"9096d1019e4e8563480694acd965d19d9042f234fd57dd77da00fc300f0f2fb1"} Mar 19 17:01:45 crc kubenswrapper[4792]: I0319 17:01:45.761196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6cld2" event={"ID":"ee375e3b-1376-4cd4-93b7-da4316b203a7","Type":"ContainerStarted","Data":"587cdf040aa5503847d573c1f36fd95a324761cea51b0bb7a748561f3c3e0d5e"} Mar 19 17:01:45 crc kubenswrapper[4792]: I0319 17:01:45.761206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6cld2" event={"ID":"ee375e3b-1376-4cd4-93b7-da4316b203a7","Type":"ContainerStarted","Data":"f023b519db530aba7ff1f2cc41c655b5076493d4958cc604d88f382d8ee201d8"} Mar 19 17:01:45 crc kubenswrapper[4792]: I0319 17:01:45.761798 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6cld2" Mar 19 17:01:45 crc kubenswrapper[4792]: I0319 17:01:45.782587 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6cld2" podStartSLOduration=3.7825697529999998 podStartE2EDuration="3.782569753s" podCreationTimestamp="2026-03-19 17:01:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:01:45.775073888 +0000 UTC m=+1268.921131428" watchObservedRunningTime="2026-03-19 17:01:45.782569753 +0000 UTC m=+1268.928627293" Mar 19 17:01:53 crc kubenswrapper[4792]: I0319 17:01:53.829280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" event={"ID":"30ef8aea-daf2-4351-bf36-a8238738129a","Type":"ContainerStarted","Data":"abf1058326df618e831edba83d3443b0140840c29518b5c50339ff8946897506"} Mar 19 17:01:53 crc kubenswrapper[4792]: I0319 17:01:53.829831 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 17:01:53 crc kubenswrapper[4792]: I0319 17:01:53.833297 4792 generic.go:334] "Generic (PLEG): container finished" podID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerID="99cb278e5b16c0375aca07293ae1ccc16fa40a17fd121c316c521040b9ded812" exitCode=0 Mar 19 17:01:53 crc kubenswrapper[4792]: I0319 17:01:53.833341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerDied","Data":"99cb278e5b16c0375aca07293ae1ccc16fa40a17fd121c316c521040b9ded812"} Mar 19 17:01:53 crc kubenswrapper[4792]: I0319 17:01:53.851548 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podStartSLOduration=2.296054357 podStartE2EDuration="11.851528608s" podCreationTimestamp="2026-03-19 17:01:42 +0000 UTC" firstStartedPulling="2026-03-19 17:01:43.614010993 +0000 UTC m=+1266.760068533" lastFinishedPulling="2026-03-19 17:01:53.169485234 +0000 UTC m=+1276.315542784" observedRunningTime="2026-03-19 17:01:53.848582998 +0000 UTC m=+1276.994640548" watchObservedRunningTime="2026-03-19 17:01:53.851528608 +0000 UTC m=+1276.997586168" Mar 19 
17:01:54 crc kubenswrapper[4792]: I0319 17:01:54.842057 4792 generic.go:334] "Generic (PLEG): container finished" podID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerID="02c490aca5a5ebf5d0be50e350b0d43fb2a738a105a9b6daea4c30f537cc8c49" exitCode=0
Mar 19 17:01:54 crc kubenswrapper[4792]: I0319 17:01:54.842117 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerDied","Data":"02c490aca5a5ebf5d0be50e350b0d43fb2a738a105a9b6daea4c30f537cc8c49"}
Mar 19 17:01:55 crc kubenswrapper[4792]: I0319 17:01:55.853968 4792 generic.go:334] "Generic (PLEG): container finished" podID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerID="a3a352f5b2caa8d818ce3dd2c72d2c8b777e7f9af427f9496947baf242f3f323" exitCode=0
Mar 19 17:01:55 crc kubenswrapper[4792]: I0319 17:01:55.854044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerDied","Data":"a3a352f5b2caa8d818ce3dd2c72d2c8b777e7f9af427f9496947baf242f3f323"}
Mar 19 17:01:56 crc kubenswrapper[4792]: I0319 17:01:56.869734 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"7d2104d3de41229b689f2de9e8405924d0025aca724ca746d0a498f5c3fe29d7"}
Mar 19 17:01:56 crc kubenswrapper[4792]: I0319 17:01:56.869784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"42fbbaaa9d6a87fc5026743ab48a54e3cae6c8ff365ca638c8b481afa54c4dc1"}
Mar 19 17:01:56 crc kubenswrapper[4792]: I0319 17:01:56.869794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"d93fb03e75bcd0620bd2076735095088458fa427c13473f366935f3bc1088f5a"}
Mar 19 17:01:56 crc kubenswrapper[4792]: I0319 17:01:56.869802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"02eb8fb242dc30e4be78084af24230af6458e148a8624d1b640478ba6aa93114"}
Mar 19 17:01:56 crc kubenswrapper[4792]: I0319 17:01:56.869812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"07e97d4683994fd5540dacaed43fa860b7b75698934b678d403fa90bb02d62af"}
Mar 19 17:01:57 crc kubenswrapper[4792]: I0319 17:01:57.886250 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"cec345f29f60bcd81e641ef626fb0c75354538c63e304bcea04917a354da8282"}
Mar 19 17:01:57 crc kubenswrapper[4792]: I0319 17:01:57.887392 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 17:01:57 crc kubenswrapper[4792]: I0319 17:01:57.925272 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nd7zd" podStartSLOduration=6.73166561 podStartE2EDuration="15.925255769s" podCreationTimestamp="2026-03-19 17:01:42 +0000 UTC" firstStartedPulling="2026-03-19 17:01:43.95078389 +0000 UTC m=+1267.096841430" lastFinishedPulling="2026-03-19 17:01:53.144374049 +0000 UTC m=+1276.290431589" observedRunningTime="2026-03-19 17:01:57.918917917 +0000 UTC m=+1281.064975487" watchObservedRunningTime="2026-03-19 17:01:57.925255769 +0000 UTC m=+1281.071313309"
Mar 19 17:01:58 crc kubenswrapper[4792]: I0319 17:01:58.745507 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 17:01:58 crc kubenswrapper[4792]: I0319 17:01:58.782487 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.555212 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565662-bdzvt"]
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.556773 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.559189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.559211 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.559427 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.563726 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565662-bdzvt"]
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.647323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plm7\" (UniqueName: \"kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7\") pod \"auto-csr-approver-29565662-bdzvt\" (UID: \"02835fca-719f-4bb9-8124-624a8fc2c074\") " pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.748352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plm7\" (UniqueName: \"kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7\") pod \"auto-csr-approver-29565662-bdzvt\" (UID: \"02835fca-719f-4bb9-8124-624a8fc2c074\") " pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.766714 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plm7\" (UniqueName: \"kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7\") pod \"auto-csr-approver-29565662-bdzvt\" (UID: \"02835fca-719f-4bb9-8124-624a8fc2c074\") " pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:00 crc kubenswrapper[4792]: I0319 17:02:00.873470 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:01 crc kubenswrapper[4792]: I0319 17:02:01.275774 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565662-bdzvt"]
Mar 19 17:02:01 crc kubenswrapper[4792]: W0319 17:02:01.277409 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02835fca_719f_4bb9_8124_624a8fc2c074.slice/crio-6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c WatchSource:0}: Error finding container 6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c: Status 404 returned error can't find the container with id 6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c
Mar 19 17:02:01 crc kubenswrapper[4792]: I0319 17:02:01.917141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565662-bdzvt" event={"ID":"02835fca-719f-4bb9-8124-624a8fc2c074","Type":"ContainerStarted","Data":"6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c"}
Mar 19 17:02:02 crc kubenswrapper[4792]: I0319 17:02:02.926037 4792 generic.go:334] "Generic (PLEG): container finished" podID="02835fca-719f-4bb9-8124-624a8fc2c074" containerID="c3df5665c2b421922eaee727841f02a943dd42dfdeb5a547ab4d2237e0fcc0f2" exitCode=0
Mar 19 17:02:02 crc kubenswrapper[4792]: I0319 17:02:02.926124 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565662-bdzvt" event={"ID":"02835fca-719f-4bb9-8124-624a8fc2c074","Type":"ContainerDied","Data":"c3df5665c2b421922eaee727841f02a943dd42dfdeb5a547ab4d2237e0fcc0f2"}
Mar 19 17:02:03 crc kubenswrapper[4792]: I0319 17:02:03.197118 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"
Mar 19 17:02:03 crc kubenswrapper[4792]: I0319 17:02:03.694030 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-gdvnw"
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.251151 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.409444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8plm7\" (UniqueName: \"kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7\") pod \"02835fca-719f-4bb9-8124-624a8fc2c074\" (UID: \"02835fca-719f-4bb9-8124-624a8fc2c074\") "
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.421106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7" (OuterVolumeSpecName: "kube-api-access-8plm7") pod "02835fca-719f-4bb9-8124-624a8fc2c074" (UID: "02835fca-719f-4bb9-8124-624a8fc2c074"). InnerVolumeSpecName "kube-api-access-8plm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.512083 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8plm7\" (UniqueName: \"kubernetes.io/projected/02835fca-719f-4bb9-8124-624a8fc2c074-kube-api-access-8plm7\") on node \"crc\" DevicePath \"\""
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.791238 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6cld2"
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.941962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565662-bdzvt" event={"ID":"02835fca-719f-4bb9-8124-624a8fc2c074","Type":"ContainerDied","Data":"6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c"}
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.942005 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e44e749f219f509bf048287135285d41d68d0068234631ee2e38ee5a83f027c"
Mar 19 17:02:04 crc kubenswrapper[4792]: I0319 17:02:04.942081 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565662-bdzvt"
Mar 19 17:02:05 crc kubenswrapper[4792]: I0319 17:02:05.302826 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565656-7gnrl"]
Mar 19 17:02:05 crc kubenswrapper[4792]: I0319 17:02:05.308187 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565656-7gnrl"]
Mar 19 17:02:05 crc kubenswrapper[4792]: I0319 17:02:05.748293 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6feb904a-aaa6-415b-9df2-e29655226c0b" path="/var/lib/kubelet/pods/6feb904a-aaa6-415b-9df2-e29655226c0b/volumes"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.553936 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:07 crc kubenswrapper[4792]: E0319 17:02:07.554709 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02835fca-719f-4bb9-8124-624a8fc2c074" containerName="oc"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.554731 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="02835fca-719f-4bb9-8124-624a8fc2c074" containerName="oc"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.555014 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="02835fca-719f-4bb9-8124-624a8fc2c074" containerName="oc"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.555898 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.560720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5p7p8"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.562093 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.562141 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.563540 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcs8\" (UniqueName: \"kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8\") pod \"openstack-operator-index-x5qcz\" (UID: \"bbba3b01-faa4-4f61-8bf0-13bcc2605de0\") " pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.573149 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.665734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcs8\" (UniqueName: \"kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8\") pod \"openstack-operator-index-x5qcz\" (UID: \"bbba3b01-faa4-4f61-8bf0-13bcc2605de0\") " pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.688345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcs8\" (UniqueName: \"kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8\") pod \"openstack-operator-index-x5qcz\" (UID: \"bbba3b01-faa4-4f61-8bf0-13bcc2605de0\") " pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:07 crc kubenswrapper[4792]: I0319 17:02:07.871248 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:08 crc kubenswrapper[4792]: W0319 17:02:08.289871 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbba3b01_faa4_4f61_8bf0_13bcc2605de0.slice/crio-5623ac490156965fb2740a7c43aebbdee124c185d3396b7883d2b73dd0281d6c WatchSource:0}: Error finding container 5623ac490156965fb2740a7c43aebbdee124c185d3396b7883d2b73dd0281d6c: Status 404 returned error can't find the container with id 5623ac490156965fb2740a7c43aebbdee124c185d3396b7883d2b73dd0281d6c
Mar 19 17:02:08 crc kubenswrapper[4792]: I0319 17:02:08.291775 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:08 crc kubenswrapper[4792]: I0319 17:02:08.995618 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5qcz" event={"ID":"bbba3b01-faa4-4f61-8bf0-13bcc2605de0","Type":"ContainerStarted","Data":"5623ac490156965fb2740a7c43aebbdee124c185d3396b7883d2b73dd0281d6c"}
Mar 19 17:02:10 crc kubenswrapper[4792]: I0319 17:02:10.915499 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.011776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5qcz" event={"ID":"bbba3b01-faa4-4f61-8bf0-13bcc2605de0","Type":"ContainerStarted","Data":"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"}
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.029543 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x5qcz" podStartSLOduration=1.87863363 podStartE2EDuration="4.029522678s" podCreationTimestamp="2026-03-19 17:02:07 +0000 UTC" firstStartedPulling="2026-03-19 17:02:08.293298131 +0000 UTC m=+1291.439355671" lastFinishedPulling="2026-03-19 17:02:10.444187179 +0000 UTC m=+1293.590244719" observedRunningTime="2026-03-19 17:02:11.024537363 +0000 UTC m=+1294.170594913" watchObservedRunningTime="2026-03-19 17:02:11.029522678 +0000 UTC m=+1294.175580228"
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.133155 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v9gs9"]
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.135933 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.141183 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v9gs9"]
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.334453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsh5m\" (UniqueName: \"kubernetes.io/projected/2d317332-2487-47d0-b052-eb6bd421c0d1-kube-api-access-xsh5m\") pod \"openstack-operator-index-v9gs9\" (UID: \"2d317332-2487-47d0-b052-eb6bd421c0d1\") " pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.435899 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsh5m\" (UniqueName: \"kubernetes.io/projected/2d317332-2487-47d0-b052-eb6bd421c0d1-kube-api-access-xsh5m\") pod \"openstack-operator-index-v9gs9\" (UID: \"2d317332-2487-47d0-b052-eb6bd421c0d1\") " pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.458987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsh5m\" (UniqueName: \"kubernetes.io/projected/2d317332-2487-47d0-b052-eb6bd421c0d1-kube-api-access-xsh5m\") pod \"openstack-operator-index-v9gs9\" (UID: \"2d317332-2487-47d0-b052-eb6bd421c0d1\") " pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:11 crc kubenswrapper[4792]: I0319 17:02:11.755959 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.019209 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x5qcz" podUID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" containerName="registry-server" containerID="cri-o://4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e" gracePeriod=2
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.185149 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v9gs9"]
Mar 19 17:02:12 crc kubenswrapper[4792]: W0319 17:02:12.187711 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d317332_2487_47d0_b052_eb6bd421c0d1.slice/crio-b9ca18b8e39cdbc5fd01871639754734a418f4bd681c81fa4ec92e84b4e4ce6f WatchSource:0}: Error finding container b9ca18b8e39cdbc5fd01871639754734a418f4bd681c81fa4ec92e84b4e4ce6f: Status 404 returned error can't find the container with id b9ca18b8e39cdbc5fd01871639754734a418f4bd681c81fa4ec92e84b4e4ce6f
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.425279 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.566363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcs8\" (UniqueName: \"kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8\") pod \"bbba3b01-faa4-4f61-8bf0-13bcc2605de0\" (UID: \"bbba3b01-faa4-4f61-8bf0-13bcc2605de0\") "
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.571316 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8" (OuterVolumeSpecName: "kube-api-access-rxcs8") pod "bbba3b01-faa4-4f61-8bf0-13bcc2605de0" (UID: "bbba3b01-faa4-4f61-8bf0-13bcc2605de0"). InnerVolumeSpecName "kube-api-access-rxcs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:02:12 crc kubenswrapper[4792]: I0319 17:02:12.668813 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcs8\" (UniqueName: \"kubernetes.io/projected/bbba3b01-faa4-4f61-8bf0-13bcc2605de0-kube-api-access-rxcs8\") on node \"crc\" DevicePath \"\""
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.032382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v9gs9" event={"ID":"2d317332-2487-47d0-b052-eb6bd421c0d1","Type":"ContainerStarted","Data":"8a3671f6da4a27ed50f4a23c001de3b5f6eaa70ae7175f2ff5f47bee71109651"}
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.032750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v9gs9" event={"ID":"2d317332-2487-47d0-b052-eb6bd421c0d1","Type":"ContainerStarted","Data":"b9ca18b8e39cdbc5fd01871639754734a418f4bd681c81fa4ec92e84b4e4ce6f"}
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.037967 4792 generic.go:334] "Generic (PLEG): container finished" podID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" containerID="4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e" exitCode=0
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.037997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5qcz" event={"ID":"bbba3b01-faa4-4f61-8bf0-13bcc2605de0","Type":"ContainerDied","Data":"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"}
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.038038 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5qcz" event={"ID":"bbba3b01-faa4-4f61-8bf0-13bcc2605de0","Type":"ContainerDied","Data":"5623ac490156965fb2740a7c43aebbdee124c185d3396b7883d2b73dd0281d6c"}
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.038060 4792 scope.go:117] "RemoveContainer" containerID="4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.038066 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x5qcz"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.060382 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v9gs9" podStartSLOduration=2.014406833 podStartE2EDuration="2.060363735s" podCreationTimestamp="2026-03-19 17:02:11 +0000 UTC" firstStartedPulling="2026-03-19 17:02:12.192813666 +0000 UTC m=+1295.338871206" lastFinishedPulling="2026-03-19 17:02:12.238770568 +0000 UTC m=+1295.384828108" observedRunningTime="2026-03-19 17:02:13.047082743 +0000 UTC m=+1296.193140283" watchObservedRunningTime="2026-03-19 17:02:13.060363735 +0000 UTC m=+1296.206421275"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.067094 4792 scope.go:117] "RemoveContainer" containerID="4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"
Mar 19 17:02:13 crc kubenswrapper[4792]: E0319 17:02:13.067708 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e\": container with ID starting with 4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e not found: ID does not exist" containerID="4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.067747 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e"} err="failed to get container status \"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e\": rpc error: code = NotFound desc = could not find container \"4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e\": container with ID starting with 4cff639a8136758e0d20c58987ba1ebc5acb74d4d7e4dcd758c9bae35039859e not found: ID does not exist"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.078858 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.084706 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x5qcz"]
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.751751 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" path="/var/lib/kubelet/pods/bbba3b01-faa4-4f61-8bf0-13bcc2605de0/volumes"
Mar 19 17:02:13 crc kubenswrapper[4792]: I0319 17:02:13.752451 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 17:02:21 crc kubenswrapper[4792]: I0319 17:02:21.757175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:21 crc kubenswrapper[4792]: I0319 17:02:21.757819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:21 crc kubenswrapper[4792]: I0319 17:02:21.794591 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:22 crc kubenswrapper[4792]: I0319 17:02:22.126984 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v9gs9"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.774803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"]
Mar 19 17:02:38 crc kubenswrapper[4792]: E0319 17:02:38.775728 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" containerName="registry-server"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.775742 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" containerName="registry-server"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.776002 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbba3b01-faa4-4f61-8bf0-13bcc2605de0" containerName="registry-server"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.777293 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.781284 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jjzql"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.782756 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"]
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.912587 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.912637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:38 crc kubenswrapper[4792]: I0319 17:02:38.912882 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmzl\" (UniqueName: \"kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.015303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.015367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.015457 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmzl\" (UniqueName: \"kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.016140 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.016198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.033077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmzl\" (UniqueName: \"kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl\") pod \"1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") " pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.098610 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:39 crc kubenswrapper[4792]: I0319 17:02:39.531076 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"]
Mar 19 17:02:39 crc kubenswrapper[4792]: W0319 17:02:39.535231 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8fec59_30ff_4ec5_a64f_e7e49b58e6b1.slice/crio-9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af WatchSource:0}: Error finding container 9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af: Status 404 returned error can't find the container with id 9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af
Mar 19 17:02:40 crc kubenswrapper[4792]: I0319 17:02:40.160639 4792 scope.go:117] "RemoveContainer" containerID="2c97fee6086388388932e3291e13b8fc38b2d1da10887a07cd6ea03e7f06d6a0"
Mar 19 17:02:40 crc kubenswrapper[4792]: I0319 17:02:40.239551 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerID="1f2db5780897984a645a8e1f55b0f7f7b646b0a54b3b5f3fd83b51e0c78d10ee" exitCode=0
Mar 19 17:02:40 crc kubenswrapper[4792]: I0319 17:02:40.239657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj" event={"ID":"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1","Type":"ContainerDied","Data":"1f2db5780897984a645a8e1f55b0f7f7b646b0a54b3b5f3fd83b51e0c78d10ee"}
Mar 19 17:02:40 crc kubenswrapper[4792]: I0319 17:02:40.239727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj" event={"ID":"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1","Type":"ContainerStarted","Data":"9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af"}
Mar 19 17:02:41 crc kubenswrapper[4792]: I0319 17:02:41.248573 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerID="bd540eb9f3e8a913bae73ef81f1cafc81c7cb7f53befc5156060cada01315011" exitCode=0
Mar 19 17:02:41 crc kubenswrapper[4792]: I0319 17:02:41.248640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj" event={"ID":"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1","Type":"ContainerDied","Data":"bd540eb9f3e8a913bae73ef81f1cafc81c7cb7f53befc5156060cada01315011"}
Mar 19 17:02:42 crc kubenswrapper[4792]: I0319 17:02:42.259825 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerID="399d914c8b02206b45706b2f24adc390a4bbeaf81994a15a73f37221f4a8ad8b" exitCode=0
Mar 19 17:02:42 crc kubenswrapper[4792]: I0319 17:02:42.259930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj" event={"ID":"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1","Type":"ContainerDied","Data":"399d914c8b02206b45706b2f24adc390a4bbeaf81994a15a73f37221f4a8ad8b"}
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.562049 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.689588 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util\") pod \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") "
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.690148 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxmzl\" (UniqueName: \"kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl\") pod \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") "
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.690175 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle\") pod \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\" (UID: \"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1\") "
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.691305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle" (OuterVolumeSpecName: "bundle") pod "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" (UID: "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.697470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl" (OuterVolumeSpecName: "kube-api-access-xxmzl") pod "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" (UID: "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1"). InnerVolumeSpecName "kube-api-access-xxmzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.703253 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util" (OuterVolumeSpecName: "util") pod "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" (UID: "2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.791988 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxmzl\" (UniqueName: \"kubernetes.io/projected/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-kube-api-access-xxmzl\") on node \"crc\" DevicePath \"\""
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.792044 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:02:43 crc kubenswrapper[4792]: I0319 17:02:43.792053 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1-util\") on node \"crc\" DevicePath \"\""
Mar 19 17:02:44 crc kubenswrapper[4792]: I0319 17:02:44.276365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj" event={"ID":"2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1","Type":"ContainerDied","Data":"9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af"}
Mar 19 17:02:44 crc kubenswrapper[4792]: I0319 17:02:44.276405 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be340948b154ce36fc6c38708eabf3b1f004ba95f2f5df579bd7da5303c14af"
Mar 19 17:02:44 crc kubenswrapper[4792]: I0319 17:02:44.276513 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj"
Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.231321 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.231664 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.938578 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx"]
Mar 19 17:02:50 crc kubenswrapper[4792]: E0319 17:02:50.939290 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="extract"
Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.939412 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="extract"
Mar 19 17:02:50 crc kubenswrapper[4792]: E0319 17:02:50.939508 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="pull"
Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.939583 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="pull"
Mar 19 17:02:50 crc kubenswrapper[4792]: E0319 17:02:50.939661 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1"
containerName="util" Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.939736 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="util" Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.940034 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1" containerName="extract" Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.940876 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.942867 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qhkvl" Mar 19 17:02:50 crc kubenswrapper[4792]: I0319 17:02:50.970171 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx"] Mar 19 17:02:51 crc kubenswrapper[4792]: I0319 17:02:51.038212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4x6v\" (UniqueName: \"kubernetes.io/projected/2f5d3346-4746-45e3-a73e-3d94d586e34d-kube-api-access-x4x6v\") pod \"openstack-operator-controller-init-7658474f4d-cpqrx\" (UID: \"2f5d3346-4746-45e3-a73e-3d94d586e34d\") " pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:51 crc kubenswrapper[4792]: I0319 17:02:51.139631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4x6v\" (UniqueName: \"kubernetes.io/projected/2f5d3346-4746-45e3-a73e-3d94d586e34d-kube-api-access-x4x6v\") pod \"openstack-operator-controller-init-7658474f4d-cpqrx\" (UID: \"2f5d3346-4746-45e3-a73e-3d94d586e34d\") " pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:51 crc kubenswrapper[4792]: I0319 
17:02:51.164210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4x6v\" (UniqueName: \"kubernetes.io/projected/2f5d3346-4746-45e3-a73e-3d94d586e34d-kube-api-access-x4x6v\") pod \"openstack-operator-controller-init-7658474f4d-cpqrx\" (UID: \"2f5d3346-4746-45e3-a73e-3d94d586e34d\") " pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:51 crc kubenswrapper[4792]: I0319 17:02:51.258354 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:51 crc kubenswrapper[4792]: I0319 17:02:51.722308 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx"] Mar 19 17:02:52 crc kubenswrapper[4792]: I0319 17:02:52.337758 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" event={"ID":"2f5d3346-4746-45e3-a73e-3d94d586e34d","Type":"ContainerStarted","Data":"7d169b840493b3d65f313d3aaea80dda33c6ee42cb5df9226c32a9555abcf815"} Mar 19 17:02:57 crc kubenswrapper[4792]: I0319 17:02:57.381790 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" event={"ID":"2f5d3346-4746-45e3-a73e-3d94d586e34d","Type":"ContainerStarted","Data":"ab238061fa58a0c097c8698f88253c5a779985601bd219520a6dc071a1baa76d"} Mar 19 17:02:57 crc kubenswrapper[4792]: I0319 17:02:57.382356 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:02:57 crc kubenswrapper[4792]: I0319 17:02:57.409930 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" podStartSLOduration=2.855363418 
podStartE2EDuration="7.409912702s" podCreationTimestamp="2026-03-19 17:02:50 +0000 UTC" firstStartedPulling="2026-03-19 17:02:51.730866139 +0000 UTC m=+1334.876923679" lastFinishedPulling="2026-03-19 17:02:56.285415423 +0000 UTC m=+1339.431472963" observedRunningTime="2026-03-19 17:02:57.407743942 +0000 UTC m=+1340.553801492" watchObservedRunningTime="2026-03-19 17:02:57.409912702 +0000 UTC m=+1340.555970242" Mar 19 17:03:02 crc kubenswrapper[4792]: I0319 17:03:02.140513 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 17:03:20 crc kubenswrapper[4792]: I0319 17:03:20.230832 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:03:20 crc kubenswrapper[4792]: I0319 17:03:20.231399 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.802353 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.803868 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.806630 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7hr4b" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.811191 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.819163 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.826278 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8vnj8" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.843181 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.886582 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxv2\" (UniqueName: \"kubernetes.io/projected/9bb5702e-9617-4fb3-a13b-32aa8f7209bc-kube-api-access-lsxv2\") pod \"barbican-operator-controller-manager-59bc569d95-2487f\" (UID: \"9bb5702e-9617-4fb3-a13b-32aa8f7209bc\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.887061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jlt\" (UniqueName: \"kubernetes.io/projected/a1ed7ec7-1763-4593-a115-448e7da65482-kube-api-access-m4jlt\") pod \"cinder-operator-controller-manager-8d58dc466-rd29l\" (UID: 
\"a1ed7ec7-1763-4593-a115-448e7da65482\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.891764 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.920542 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.921954 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.927732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-46p69" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.931232 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.932154 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.933779 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-d525k" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.947304 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.961334 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl"] Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.965524 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.967627 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4fbls" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.988636 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxv2\" (UniqueName: \"kubernetes.io/projected/9bb5702e-9617-4fb3-a13b-32aa8f7209bc-kube-api-access-lsxv2\") pod \"barbican-operator-controller-manager-59bc569d95-2487f\" (UID: \"9bb5702e-9617-4fb3-a13b-32aa8f7209bc\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.988714 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk896\" (UniqueName: \"kubernetes.io/projected/29961080-94d4-4275-8d1a-baf1405cf2bb-kube-api-access-xk896\") pod \"glance-operator-controller-manager-79df6bcc97-8272z\" (UID: \"29961080-94d4-4275-8d1a-baf1405cf2bb\") " 
pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.988801 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsz4l\" (UniqueName: \"kubernetes.io/projected/c82a8813-bf57-4e7c-88fb-34b0ebee51be-kube-api-access-xsz4l\") pod \"designate-operator-controller-manager-588d4d986b-cn88d\" (UID: \"c82a8813-bf57-4e7c-88fb-34b0ebee51be\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.988825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jlt\" (UniqueName: \"kubernetes.io/projected/a1ed7ec7-1763-4593-a115-448e7da65482-kube-api-access-m4jlt\") pod \"cinder-operator-controller-manager-8d58dc466-rd29l\" (UID: \"a1ed7ec7-1763-4593-a115-448e7da65482\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:03:29 crc kubenswrapper[4792]: I0319 17:03:29.995390 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.017735 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.028617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxv2\" (UniqueName: \"kubernetes.io/projected/9bb5702e-9617-4fb3-a13b-32aa8f7209bc-kube-api-access-lsxv2\") pod \"barbican-operator-controller-manager-59bc569d95-2487f\" (UID: \"9bb5702e-9617-4fb3-a13b-32aa8f7209bc\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.030957 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.032396 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.042245 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h869c" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.044428 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jlt\" (UniqueName: \"kubernetes.io/projected/a1ed7ec7-1763-4593-a115-448e7da65482-kube-api-access-m4jlt\") pod \"cinder-operator-controller-manager-8d58dc466-rd29l\" (UID: \"a1ed7ec7-1763-4593-a115-448e7da65482\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.047905 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.063700 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.065015 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.066539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.068745 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.073219 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-fsnst" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.081286 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.082261 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.091487 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n5n4m" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.092416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk896\" (UniqueName: \"kubernetes.io/projected/29961080-94d4-4275-8d1a-baf1405cf2bb-kube-api-access-xk896\") pod \"glance-operator-controller-manager-79df6bcc97-8272z\" (UID: \"29961080-94d4-4275-8d1a-baf1405cf2bb\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.092453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5g8\" (UniqueName: 
\"kubernetes.io/projected/bce0486f-f235-464e-acd7-bc8da076eebe-kube-api-access-gz5g8\") pod \"horizon-operator-controller-manager-8464cc45fb-zkx8w\" (UID: \"bce0486f-f235-464e-acd7-bc8da076eebe\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.092509 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhzbt\" (UniqueName: \"kubernetes.io/projected/335bce01-df52-41ca-b47a-daa5e8ac917e-kube-api-access-lhzbt\") pod \"heat-operator-controller-manager-67dd5f86f5-v6tfl\" (UID: \"335bce01-df52-41ca-b47a-daa5e8ac917e\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.092553 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsz4l\" (UniqueName: \"kubernetes.io/projected/c82a8813-bf57-4e7c-88fb-34b0ebee51be-kube-api-access-xsz4l\") pod \"designate-operator-controller-manager-588d4d986b-cn88d\" (UID: \"c82a8813-bf57-4e7c-88fb-34b0ebee51be\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.103096 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.127387 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk896\" (UniqueName: \"kubernetes.io/projected/29961080-94d4-4275-8d1a-baf1405cf2bb-kube-api-access-xk896\") pod \"glance-operator-controller-manager-79df6bcc97-8272z\" (UID: \"29961080-94d4-4275-8d1a-baf1405cf2bb\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.133815 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.134785 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.136519 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9df7k" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.136879 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsz4l\" (UniqueName: \"kubernetes.io/projected/c82a8813-bf57-4e7c-88fb-34b0ebee51be-kube-api-access-xsz4l\") pod \"designate-operator-controller-manager-588d4d986b-cn88d\" (UID: \"c82a8813-bf57-4e7c-88fb-34b0ebee51be\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.146309 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.147416 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.151428 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.154134 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jmhhk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.162183 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.163382 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.169105 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dj92g" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.183376 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.193674 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbn4n\" (UniqueName: \"kubernetes.io/projected/b7f6258a-2ce1-482c-84ee-e869f191cb69-kube-api-access-jbn4n\") pod \"ironic-operator-controller-manager-6f787dddc9-lkhgd\" (UID: \"b7f6258a-2ce1-482c-84ee-e869f191cb69\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.194052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " 
pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.194089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpmn5\" (UniqueName: \"kubernetes.io/projected/ae024059-6924-482c-88b6-c845e6932026-kube-api-access-rpmn5\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.194127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qpz7\" (UniqueName: \"kubernetes.io/projected/80afdbc0-ff4c-4806-884d-ef3542b4de9c-kube-api-access-5qpz7\") pod \"keystone-operator-controller-manager-768b96df4c-s2pjr\" (UID: \"80afdbc0-ff4c-4806-884d-ef3542b4de9c\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.194248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hdn\" (UniqueName: \"kubernetes.io/projected/d14a657c-5e70-4847-9b07-f85ce53d7757-kube-api-access-n7hdn\") pod \"manila-operator-controller-manager-55f864c847-h5w4z\" (UID: \"d14a657c-5e70-4847-9b07-f85ce53d7757\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.194302 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5g8\" (UniqueName: \"kubernetes.io/projected/bce0486f-f235-464e-acd7-bc8da076eebe-kube-api-access-gz5g8\") pod \"horizon-operator-controller-manager-8464cc45fb-zkx8w\" (UID: \"bce0486f-f235-464e-acd7-bc8da076eebe\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:03:30 crc 
kubenswrapper[4792]: I0319 17:03:30.194489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzbt\" (UniqueName: \"kubernetes.io/projected/335bce01-df52-41ca-b47a-daa5e8ac917e-kube-api-access-lhzbt\") pod \"heat-operator-controller-manager-67dd5f86f5-v6tfl\" (UID: \"335bce01-df52-41ca-b47a-daa5e8ac917e\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.195222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.217555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzbt\" (UniqueName: \"kubernetes.io/projected/335bce01-df52-41ca-b47a-daa5e8ac917e-kube-api-access-lhzbt\") pod \"heat-operator-controller-manager-67dd5f86f5-v6tfl\" (UID: \"335bce01-df52-41ca-b47a-daa5e8ac917e\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.220686 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.221628 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.223216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nvvqp" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.243999 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.249898 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.250285 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.260623 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.266223 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.270080 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.271828 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.272909 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.273656 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.279160 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cbsqc" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.279422 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-r5k2f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.279652 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.283549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5g8\" (UniqueName: \"kubernetes.io/projected/bce0486f-f235-464e-acd7-bc8da076eebe-kube-api-access-gz5g8\") pod \"horizon-operator-controller-manager-8464cc45fb-zkx8w\" (UID: \"bce0486f-f235-464e-acd7-bc8da076eebe\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qpz7\" (UniqueName: \"kubernetes.io/projected/80afdbc0-ff4c-4806-884d-ef3542b4de9c-kube-api-access-5qpz7\") pod \"keystone-operator-controller-manager-768b96df4c-s2pjr\" (UID: \"80afdbc0-ff4c-4806-884d-ef3542b4de9c\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n7hdn\" (UniqueName: \"kubernetes.io/projected/d14a657c-5e70-4847-9b07-f85ce53d7757-kube-api-access-n7hdn\") pod \"manila-operator-controller-manager-55f864c847-h5w4z\" (UID: \"d14a657c-5e70-4847-9b07-f85ce53d7757\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295660 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbn4n\" (UniqueName: \"kubernetes.io/projected/b7f6258a-2ce1-482c-84ee-e869f191cb69-kube-api-access-jbn4n\") pod \"ironic-operator-controller-manager-6f787dddc9-lkhgd\" (UID: \"b7f6258a-2ce1-482c-84ee-e869f191cb69\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295692 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzwp\" (UniqueName: \"kubernetes.io/projected/d89e09ff-441b-491e-98f7-9bf618322505-kube-api-access-hnzwp\") pod \"neutron-operator-controller-manager-767865f676-mdbhz\" (UID: \"d89e09ff-441b-491e-98f7-9bf618322505\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpmn5\" (UniqueName: \"kubernetes.io/projected/ae024059-6924-482c-88b6-c845e6932026-kube-api-access-rpmn5\") pod 
\"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.295788 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vfc\" (UniqueName: \"kubernetes.io/projected/ca8f4495-eabc-425f-82dd-f3c5329de925-kube-api-access-q2vfc\") pod \"mariadb-operator-controller-manager-67ccfc9778-b66p7\" (UID: \"ca8f4495-eabc-425f-82dd-f3c5329de925\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:03:30 crc kubenswrapper[4792]: E0319 17:03:30.296399 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:30 crc kubenswrapper[4792]: E0319 17:03:30.296437 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:30.796423371 +0000 UTC m=+1373.942480911 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.305535 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.314994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbn4n\" (UniqueName: \"kubernetes.io/projected/b7f6258a-2ce1-482c-84ee-e869f191cb69-kube-api-access-jbn4n\") pod \"ironic-operator-controller-manager-6f787dddc9-lkhgd\" (UID: \"b7f6258a-2ce1-482c-84ee-e869f191cb69\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.316339 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hdn\" (UniqueName: \"kubernetes.io/projected/d14a657c-5e70-4847-9b07-f85ce53d7757-kube-api-access-n7hdn\") pod \"manila-operator-controller-manager-55f864c847-h5w4z\" (UID: \"d14a657c-5e70-4847-9b07-f85ce53d7757\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.319293 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.322366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpmn5\" (UniqueName: \"kubernetes.io/projected/ae024059-6924-482c-88b6-c845e6932026-kube-api-access-rpmn5\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.326499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qpz7\" (UniqueName: \"kubernetes.io/projected/80afdbc0-ff4c-4806-884d-ef3542b4de9c-kube-api-access-5qpz7\") pod \"keystone-operator-controller-manager-768b96df4c-s2pjr\" (UID: \"80afdbc0-ff4c-4806-884d-ef3542b4de9c\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.351851 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.353699 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.358590 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.358602 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2sl2n" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.377050 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-7xldx"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.397633 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.409494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnzwp\" (UniqueName: \"kubernetes.io/projected/d89e09ff-441b-491e-98f7-9bf618322505-kube-api-access-hnzwp\") pod \"neutron-operator-controller-manager-767865f676-mdbhz\" (UID: \"d89e09ff-441b-491e-98f7-9bf618322505\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.409624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.409654 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjcx\" (UniqueName: \"kubernetes.io/projected/33f808bd-605c-41c7-94fb-92ceab7de0a9-kube-api-access-4hjcx\") pod \"nova-operator-controller-manager-5d488d59fb-dz5pk\" (UID: \"33f808bd-605c-41c7-94fb-92ceab7de0a9\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.410015 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2d79\" (UniqueName: \"kubernetes.io/projected/29107ce9-41d6-410b-b256-723555fd6169-kube-api-access-c2d79\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.410093 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vfc\" (UniqueName: \"kubernetes.io/projected/ca8f4495-eabc-425f-82dd-f3c5329de925-kube-api-access-q2vfc\") pod \"mariadb-operator-controller-manager-67ccfc9778-b66p7\" (UID: \"ca8f4495-eabc-425f-82dd-f3c5329de925\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.410745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjth5\" (UniqueName: \"kubernetes.io/projected/74eec49e-2c05-49ce-874b-654ec80018e6-kube-api-access-mjth5\") pod \"octavia-operator-controller-manager-5b9f45d989-mt22x\" (UID: \"74eec49e-2c05-49ce-874b-654ec80018e6\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.415138 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jswxp" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.417270 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.442360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vfc\" (UniqueName: \"kubernetes.io/projected/ca8f4495-eabc-425f-82dd-f3c5329de925-kube-api-access-q2vfc\") pod \"mariadb-operator-controller-manager-67ccfc9778-b66p7\" (UID: \"ca8f4495-eabc-425f-82dd-f3c5329de925\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.462123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnzwp\" (UniqueName: \"kubernetes.io/projected/d89e09ff-441b-491e-98f7-9bf618322505-kube-api-access-hnzwp\") pod \"neutron-operator-controller-manager-767865f676-mdbhz\" (UID: \"d89e09ff-441b-491e-98f7-9bf618322505\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.504951 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.515541 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjth5\" (UniqueName: \"kubernetes.io/projected/74eec49e-2c05-49ce-874b-654ec80018e6-kube-api-access-mjth5\") pod \"octavia-operator-controller-manager-5b9f45d989-mt22x\" (UID: \"74eec49e-2c05-49ce-874b-654ec80018e6\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.515611 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.515631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjcx\" (UniqueName: \"kubernetes.io/projected/33f808bd-605c-41c7-94fb-92ceab7de0a9-kube-api-access-4hjcx\") pod \"nova-operator-controller-manager-5d488d59fb-dz5pk\" (UID: \"33f808bd-605c-41c7-94fb-92ceab7de0a9\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.515676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2d79\" (UniqueName: \"kubernetes.io/projected/29107ce9-41d6-410b-b256-723555fd6169-kube-api-access-c2d79\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.515704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47hlc\" (UniqueName: \"kubernetes.io/projected/e4f68cf5-d501-4468-a9a4-b959ae49db87-kube-api-access-47hlc\") pod \"ovn-operator-controller-manager-884679f54-7xldx\" (UID: \"e4f68cf5-d501-4468-a9a4-b959ae49db87\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:30 crc kubenswrapper[4792]: E0319 17:03:30.516495 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:30 crc 
kubenswrapper[4792]: E0319 17:03:30.516546 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:31.016529678 +0000 UTC m=+1374.162587228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.518850 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.553513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-7xldx"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.556975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjth5\" (UniqueName: \"kubernetes.io/projected/74eec49e-2c05-49ce-874b-654ec80018e6-kube-api-access-mjth5\") pod \"octavia-operator-controller-manager-5b9f45d989-mt22x\" (UID: \"74eec49e-2c05-49ce-874b-654ec80018e6\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.559583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2d79\" (UniqueName: \"kubernetes.io/projected/29107ce9-41d6-410b-b256-723555fd6169-kube-api-access-c2d79\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.564717 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjcx\" (UniqueName: \"kubernetes.io/projected/33f808bd-605c-41c7-94fb-92ceab7de0a9-kube-api-access-4hjcx\") pod \"nova-operator-controller-manager-5d488d59fb-dz5pk\" (UID: \"33f808bd-605c-41c7-94fb-92ceab7de0a9\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.574910 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.576126 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.583021 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qrfm8" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.590208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.601336 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-p4npr"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.606125 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.608449 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.618153 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.621455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgbj\" (UniqueName: \"kubernetes.io/projected/6832677c-467f-4786-b2f8-9c999c94f3ba-kube-api-access-mqgbj\") pod \"placement-operator-controller-manager-5784578c99-gkg4f\" (UID: \"6832677c-467f-4786-b2f8-9c999c94f3ba\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.621500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47hlc\" (UniqueName: \"kubernetes.io/projected/e4f68cf5-d501-4468-a9a4-b959ae49db87-kube-api-access-47hlc\") pod \"ovn-operator-controller-manager-884679f54-7xldx\" (UID: \"e4f68cf5-d501-4468-a9a4-b959ae49db87\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.624487 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.640432 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.640757 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fq8kh" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.649287 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-p4npr"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.650081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.678811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.680652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47hlc\" (UniqueName: \"kubernetes.io/projected/e4f68cf5-d501-4468-a9a4-b959ae49db87-kube-api-access-47hlc\") pod \"ovn-operator-controller-manager-884679f54-7xldx\" (UID: \"e4f68cf5-d501-4468-a9a4-b959ae49db87\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.690385 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.691470 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.694858 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6ggbx" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.724888 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqgbj\" (UniqueName: \"kubernetes.io/projected/6832677c-467f-4786-b2f8-9c999c94f3ba-kube-api-access-mqgbj\") pod \"placement-operator-controller-manager-5784578c99-gkg4f\" (UID: \"6832677c-467f-4786-b2f8-9c999c94f3ba\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.725202 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcfmf\" (UniqueName: \"kubernetes.io/projected/2dceb468-ce3f-4650-ae5e-694664ffb360-kube-api-access-vcfmf\") pod \"swift-operator-controller-manager-c674c5965-p4npr\" (UID: \"2dceb468-ce3f-4650-ae5e-694664ffb360\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.725307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpdr\" (UniqueName: \"kubernetes.io/projected/91a44cfc-5acd-4b7c-814c-1521b5e2b85d-kube-api-access-pxpdr\") pod \"telemetry-operator-controller-manager-78877dc965-lmkcj\" (UID: \"91a44cfc-5acd-4b7c-814c-1521b5e2b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.725992 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.741569 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgbj\" (UniqueName: \"kubernetes.io/projected/6832677c-467f-4786-b2f8-9c999c94f3ba-kube-api-access-mqgbj\") pod \"placement-operator-controller-manager-5784578c99-gkg4f\" (UID: \"6832677c-467f-4786-b2f8-9c999c94f3ba\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.760778 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.762091 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.768510 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5cl4v" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.770894 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.790605 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.830091 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.830191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcfmf\" (UniqueName: \"kubernetes.io/projected/2dceb468-ce3f-4650-ae5e-694664ffb360-kube-api-access-vcfmf\") pod \"swift-operator-controller-manager-c674c5965-p4npr\" (UID: \"2dceb468-ce3f-4650-ae5e-694664ffb360\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.830436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2vl\" (UniqueName: \"kubernetes.io/projected/23c3a809-9d7c-4d60-be1f-2fbc1583e5d6-kube-api-access-6d2vl\") pod \"test-operator-controller-manager-5c5cb9c4d7-7sklh\" (UID: \"23c3a809-9d7c-4d60-be1f-2fbc1583e5d6\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.830494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpdr\" (UniqueName: \"kubernetes.io/projected/91a44cfc-5acd-4b7c-814c-1521b5e2b85d-kube-api-access-pxpdr\") pod \"telemetry-operator-controller-manager-78877dc965-lmkcj\" (UID: \"91a44cfc-5acd-4b7c-814c-1521b5e2b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:30 crc kubenswrapper[4792]: E0319 17:03:30.830948 4792 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:30 crc kubenswrapper[4792]: E0319 17:03:30.830994 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:31.830978997 +0000 UTC m=+1374.977036537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.863371 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcfmf\" (UniqueName: \"kubernetes.io/projected/2dceb468-ce3f-4650-ae5e-694664ffb360-kube-api-access-vcfmf\") pod \"swift-operator-controller-manager-c674c5965-p4npr\" (UID: \"2dceb468-ce3f-4650-ae5e-694664ffb360\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:30 crc kubenswrapper[4792]: W0319 17:03:30.868271 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb5702e_9617_4fb3_a13b_32aa8f7209bc.slice/crio-a3aa3281de3fbbddfa8f64445012b4c3650b348cff54532671743d1b19ac3d7d WatchSource:0}: Error finding container a3aa3281de3fbbddfa8f64445012b4c3650b348cff54532671743d1b19ac3d7d: Status 404 returned error can't find the container with id a3aa3281de3fbbddfa8f64445012b4c3650b348cff54532671743d1b19ac3d7d Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.868421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpdr\" (UniqueName: 
\"kubernetes.io/projected/91a44cfc-5acd-4b7c-814c-1521b5e2b85d-kube-api-access-pxpdr\") pod \"telemetry-operator-controller-manager-78877dc965-lmkcj\" (UID: \"91a44cfc-5acd-4b7c-814c-1521b5e2b85d\") " pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.876572 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.879169 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.883328 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mgc96" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.910451 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.913772 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.931584 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.932862 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.934796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdmzv\" (UniqueName: \"kubernetes.io/projected/1ca9378b-68d2-4281-b45a-7f40c30bae7c-kube-api-access-kdmzv\") pod \"watcher-operator-controller-manager-6c4d75f7f9-rg6qq\" (UID: \"1ca9378b-68d2-4281-b45a-7f40c30bae7c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.935377 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d2vl\" (UniqueName: \"kubernetes.io/projected/23c3a809-9d7c-4d60-be1f-2fbc1583e5d6-kube-api-access-6d2vl\") pod \"test-operator-controller-manager-5c5cb9c4d7-7sklh\" (UID: \"23c3a809-9d7c-4d60-be1f-2fbc1583e5d6\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.938311 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qknw2" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.938261 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.938800 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.942572 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.957739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2vl\" (UniqueName: 
\"kubernetes.io/projected/23c3a809-9d7c-4d60-be1f-2fbc1583e5d6-kube-api-access-6d2vl\") pod \"test-operator-controller-manager-5c5cb9c4d7-7sklh\" (UID: \"23c3a809-9d7c-4d60-be1f-2fbc1583e5d6\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.971184 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv"] Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.972326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.976366 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-n6bsw" Mar 19 17:03:30 crc kubenswrapper[4792]: I0319 17:03:30.983948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.025811 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.037034 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.038073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.038190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.038211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.038254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77gj\" (UniqueName: \"kubernetes.io/projected/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-kube-api-access-v77gj\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc 
kubenswrapper[4792]: I0319 17:03:31.038291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdmzv\" (UniqueName: \"kubernetes.io/projected/1ca9378b-68d2-4281-b45a-7f40c30bae7c-kube-api-access-kdmzv\") pod \"watcher-operator-controller-manager-6c4d75f7f9-rg6qq\" (UID: \"1ca9378b-68d2-4281-b45a-7f40c30bae7c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.038323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74x6\" (UniqueName: \"kubernetes.io/projected/5458fc2b-b774-488b-a5e0-1f66d2df8bfc-kube-api-access-l74x6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x2pbv\" (UID: \"5458fc2b-b774-488b-a5e0-1f66d2df8bfc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.038449 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.038493 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:32.038480091 +0000 UTC m=+1375.184537631 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.079479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdmzv\" (UniqueName: \"kubernetes.io/projected/1ca9378b-68d2-4281-b45a-7f40c30bae7c-kube-api-access-kdmzv\") pod \"watcher-operator-controller-manager-6c4d75f7f9-rg6qq\" (UID: \"1ca9378b-68d2-4281-b45a-7f40c30bae7c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.096142 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.131088 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.142042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.142557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.145609 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.145712 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:31.645692332 +0000 UTC m=+1374.791749872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.146071 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.146106 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:31.646094133 +0000 UTC m=+1374.792151673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.142768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77gj\" (UniqueName: \"kubernetes.io/projected/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-kube-api-access-v77gj\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.154723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74x6\" (UniqueName: \"kubernetes.io/projected/5458fc2b-b774-488b-a5e0-1f66d2df8bfc-kube-api-access-l74x6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x2pbv\" (UID: \"5458fc2b-b774-488b-a5e0-1f66d2df8bfc\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.179962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77gj\" (UniqueName: \"kubernetes.io/projected/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-kube-api-access-v77gj\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.203215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74x6\" (UniqueName: \"kubernetes.io/projected/5458fc2b-b774-488b-a5e0-1f66d2df8bfc-kube-api-access-l74x6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-x2pbv\" (UID: \"5458fc2b-b774-488b-a5e0-1f66d2df8bfc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.219021 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.231610 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.265870 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.416820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" event={"ID":"29961080-94d4-4275-8d1a-baf1405cf2bb","Type":"ContainerStarted","Data":"df10548326bad29f08d97518d0b87c21db1198ad8a524014e0e087fc073d7295"} Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.418539 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" event={"ID":"9bb5702e-9617-4fb3-a13b-32aa8f7209bc","Type":"ContainerStarted","Data":"a3aa3281de3fbbddfa8f64445012b4c3650b348cff54532671743d1b19ac3d7d"} Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.580668 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d"] Mar 19 17:03:31 crc kubenswrapper[4792]: W0319 17:03:31.589039 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82a8813_bf57_4e7c_88fb_34b0ebee51be.slice/crio-0d1534464ff0b065a51cb541cd8f4cd5bb7f0d635652d35745edb377cdf7733c WatchSource:0}: Error finding container 0d1534464ff0b065a51cb541cd8f4cd5bb7f0d635652d35745edb377cdf7733c: Status 404 returned error can't find the container with id 0d1534464ff0b065a51cb541cd8f4cd5bb7f0d635652d35745edb377cdf7733c Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.622227 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.663609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod 
\"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.663760 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.663827 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.663934 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:32.663913493 +0000 UTC m=+1375.809971033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.664018 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.664123 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. 
No retries permitted until 2026-03-19 17:03:32.664101218 +0000 UTC m=+1375.810158838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.817009 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.840292 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.848305 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl"] Mar 19 17:03:31 crc kubenswrapper[4792]: I0319 17:03:31.869527 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.869707 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:31 crc kubenswrapper[4792]: E0319 17:03:31.869804 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:33.869777373 +0000 UTC m=+1377.015834983 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.074132 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.074343 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.074423 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:34.074404418 +0000 UTC m=+1377.220461958 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.456590 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" event={"ID":"a1ed7ec7-1763-4593-a115-448e7da65482","Type":"ContainerStarted","Data":"79a08bd846c6591a404f8384403445d2d181da4c442ee2011999b8037a3e1151"} Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.474661 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" event={"ID":"80afdbc0-ff4c-4806-884d-ef3542b4de9c","Type":"ContainerStarted","Data":"2810ec7f7502ee1694bc9f40d21aac392934860017a068223c5ae3db01c6c875"} Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.490370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" event={"ID":"bce0486f-f235-464e-acd7-bc8da076eebe","Type":"ContainerStarted","Data":"5a717ff79b338612116104cb3db97944f26da06f76a2a07898051ef0733f9c74"} Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.501232 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-7xldx"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.503820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" event={"ID":"c82a8813-bf57-4e7c-88fb-34b0ebee51be","Type":"ContainerStarted","Data":"0d1534464ff0b065a51cb541cd8f4cd5bb7f0d635652d35745edb377cdf7733c"} Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.515518 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" event={"ID":"335bce01-df52-41ca-b47a-daa5e8ac917e","Type":"ContainerStarted","Data":"6592b5006d5bb743246c8944fe13a36df7fcdf2ff1f48e12988834d83fb8162c"} Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.526132 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.558999 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.587345 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.633986 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.653650 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.661523 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.688620 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x"] Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.708726 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " 
pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:32 crc kubenswrapper[4792]: I0319 17:03:32.709504 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.709311 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.709739 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:34.709718789 +0000 UTC m=+1377.855776329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "webhook-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.709664 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:32 crc kubenswrapper[4792]: E0319 17:03:32.710156 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:34.71010502 +0000 UTC m=+1377.856162560 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.050218 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq"] Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.068518 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj"] Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.084474 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-p4npr"] Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.112940 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv"] Mar 19 17:03:33 crc kubenswrapper[4792]: W0319 17:03:33.121517 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91a44cfc_5acd_4b7c_814c_1521b5e2b85d.slice/crio-3fb11c1d79c488424bb3f446cc0b12e7f7a138bf5e90590a4cc43226d4f8a6e4 WatchSource:0}: Error finding container 3fb11c1d79c488424bb3f446cc0b12e7f7a138bf5e90590a4cc43226d4f8a6e4: Status 404 returned error can't find the container with id 3fb11c1d79c488424bb3f446cc0b12e7f7a138bf5e90590a4cc43226d4f8a6e4 Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.132998 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f"] Mar 19 17:03:33 crc kubenswrapper[4792]: W0319 17:03:33.154601 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5458fc2b_b774_488b_a5e0_1f66d2df8bfc.slice/crio-75b95cc78d49ca4ed9290d6f210a8833cff4b162b04bc7a5be7dd67ad0da8af6 WatchSource:0}: Error finding container 75b95cc78d49ca4ed9290d6f210a8833cff4b162b04bc7a5be7dd67ad0da8af6: Status 404 returned error can't find the container with id 75b95cc78d49ca4ed9290d6f210a8833cff4b162b04bc7a5be7dd67ad0da8af6 Mar 19 17:03:33 crc kubenswrapper[4792]: E0319 17:03:33.174597 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcfmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-c674c5965-p4npr_openstack-operators(2dceb468-ce3f-4650-ae5e-694664ffb360): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 19 17:03:33 crc kubenswrapper[4792]: E0319 17:03:33.176390 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" podUID="2dceb468-ce3f-4650-ae5e-694664ffb360" Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.565630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" event={"ID":"d14a657c-5e70-4847-9b07-f85ce53d7757","Type":"ContainerStarted","Data":"cd700f7406bcc4d95d41d05c5442174eed130d30f8cedbb2bc7dd105d42316ee"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.567200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" 
event={"ID":"74eec49e-2c05-49ce-874b-654ec80018e6","Type":"ContainerStarted","Data":"ca322f5effa11c766676817550d1dbe12c727bc8d51058d710a0b1e620f11195"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.610107 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" event={"ID":"33f808bd-605c-41c7-94fb-92ceab7de0a9","Type":"ContainerStarted","Data":"46b42c4241354908d3fa8953496bd2bb780131c6adf9a062346fee6b28a70abb"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.642051 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" event={"ID":"1ca9378b-68d2-4281-b45a-7f40c30bae7c","Type":"ContainerStarted","Data":"e73748b66552fc3e9e1d0dff170b8d509278c1084ff0c0e089d02c99a7f2144a"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.687006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" event={"ID":"5458fc2b-b774-488b-a5e0-1f66d2df8bfc","Type":"ContainerStarted","Data":"75b95cc78d49ca4ed9290d6f210a8833cff4b162b04bc7a5be7dd67ad0da8af6"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.694555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" event={"ID":"d89e09ff-441b-491e-98f7-9bf618322505","Type":"ContainerStarted","Data":"290d08fa7626e332e3dfe982ec3effe6d202fc0a004c52e3931a820fe3658c70"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.696069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" event={"ID":"6832677c-467f-4786-b2f8-9c999c94f3ba","Type":"ContainerStarted","Data":"0cdf86e9320c5bcd79882aedcd51fadfd351ede9ac05a61fa5d49d2a2b02f270"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.726966 4792 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" event={"ID":"b7f6258a-2ce1-482c-84ee-e869f191cb69","Type":"ContainerStarted","Data":"0176de316e418c1f2d1857ee5fbb3463dd9fa647de1527995b9b00092d7d0fac"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.740773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" event={"ID":"23c3a809-9d7c-4d60-be1f-2fbc1583e5d6","Type":"ContainerStarted","Data":"509e4f56931c72e1727e5fc6fe051f2d65b59661997f00a8b3e0b9d91e2be255"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.790722 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" event={"ID":"ca8f4495-eabc-425f-82dd-f3c5329de925","Type":"ContainerStarted","Data":"826ada3830e56db9d927a5b8f1f397e27ebbfa6b1d821a338c783377a556be71"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.795927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" event={"ID":"91a44cfc-5acd-4b7c-814c-1521b5e2b85d","Type":"ContainerStarted","Data":"3fb11c1d79c488424bb3f446cc0b12e7f7a138bf5e90590a4cc43226d4f8a6e4"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.800949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" event={"ID":"2dceb468-ce3f-4650-ae5e-694664ffb360","Type":"ContainerStarted","Data":"bf847ec5f60823951d269b19cb890d9583d84cec1a77a45ebdb2671a9d653340"} Mar 19 17:03:33 crc kubenswrapper[4792]: E0319 17:03:33.804000 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" podUID="2dceb468-ce3f-4650-ae5e-694664ffb360" Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.805309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" event={"ID":"e4f68cf5-d501-4468-a9a4-b959ae49db87","Type":"ContainerStarted","Data":"77b2ff9d39098d7876a55af7a23e641e5f6f75077115606a6c8332f1f9e3f1b4"} Mar 19 17:03:33 crc kubenswrapper[4792]: I0319 17:03:33.956219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:33 crc kubenswrapper[4792]: E0319 17:03:33.956380 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:33 crc kubenswrapper[4792]: E0319 17:03:33.956450 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:37.95643375 +0000 UTC m=+1381.102491290 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: I0319 17:03:34.160005 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.160285 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.162159 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:38.162127904 +0000 UTC m=+1381.308185444 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: I0319 17:03:34.774996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:34 crc kubenswrapper[4792]: I0319 17:03:34.775180 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.775270 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.775344 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:38.775326513 +0000 UTC m=+1381.921384053 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "webhook-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.775389 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.775449 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:38.775431705 +0000 UTC m=+1381.921489335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:34 crc kubenswrapper[4792]: E0319 17:03:34.831195 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" podUID="2dceb468-ce3f-4650-ae5e-694664ffb360" Mar 19 17:03:37 crc kubenswrapper[4792]: I0319 17:03:37.958872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " 
pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:37 crc kubenswrapper[4792]: E0319 17:03:37.959054 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:37 crc kubenswrapper[4792]: E0319 17:03:37.959471 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:45.959454284 +0000 UTC m=+1389.105511824 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: I0319 17:03:38.166347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.166562 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.166627 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:03:46.166608909 +0000 UTC m=+1389.312666449 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: I0319 17:03:38.877005 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:38 crc kubenswrapper[4792]: I0319 17:03:38.877168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.877232 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.877300 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:46.877283103 +0000 UTC m=+1390.023340643 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.877351 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 17:03:38 crc kubenswrapper[4792]: E0319 17:03:38.877461 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:03:46.877443078 +0000 UTC m=+1390.023500688 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "webhook-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: I0319 17:03:46.009404 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.009557 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.010112 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert 
podName:ae024059-6924-482c-88b6-c845e6932026 nodeName:}" failed. No retries permitted until 2026-03-19 17:04:02.010093621 +0000 UTC m=+1405.156151161 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert") pod "infra-operator-controller-manager-7b9c774f96-p22vv" (UID: "ae024059-6924-482c-88b6-c845e6932026") : secret "infra-operator-webhook-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: I0319 17:03:46.217280 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.217741 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.217800 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert podName:29107ce9-41d6-410b-b256-723555fd6169 nodeName:}" failed. No retries permitted until 2026-03-19 17:04:02.21778619 +0000 UTC m=+1405.363843730 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" (UID: "29107ce9-41d6-410b-b256-723555fd6169") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.785145 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.785734 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbn4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6f787dddc9-lkhgd_openstack-operators(b7f6258a-2ce1-482c-84ee-e869f191cb69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.786929 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" Mar 19 17:03:46 crc kubenswrapper[4792]: I0319 17:03:46.929467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod 
\"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:46 crc kubenswrapper[4792]: I0319 17:03:46.929580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.929828 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: E0319 17:03:46.929896 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs podName:8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f nodeName:}" failed. No retries permitted until 2026-03-19 17:04:02.929881403 +0000 UTC m=+1406.075938943 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs") pod "openstack-operator-controller-manager-6c7d9f85c5-9nrsb" (UID: "8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f") : secret "metrics-server-cert" not found Mar 19 17:03:46 crc kubenswrapper[4792]: I0319 17:03:46.978857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-webhook-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:03:47 crc kubenswrapper[4792]: E0319 17:03:47.000290 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.381673 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.381902 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnzwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-mdbhz_openstack-operators(d89e09ff-441b-491e-98f7-9bf618322505): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.383082 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.882064 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.882239 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2vfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67ccfc9778-b66p7_openstack-operators(ca8f4495-eabc-425f-82dd-f3c5329de925): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:48 crc kubenswrapper[4792]: E0319 17:03:48.883417 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" Mar 19 17:03:49 crc kubenswrapper[4792]: E0319 17:03:49.010285 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" Mar 19 17:03:49 crc kubenswrapper[4792]: E0319 17:03:49.010862 4792 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" Mar 19 17:03:50 crc kubenswrapper[4792]: I0319 17:03:50.231401 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:03:50 crc kubenswrapper[4792]: I0319 17:03:50.231686 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:03:50 crc kubenswrapper[4792]: I0319 17:03:50.231734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:03:50 crc kubenswrapper[4792]: I0319 17:03:50.232317 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:03:50 crc kubenswrapper[4792]: I0319 17:03:50.232380 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3" gracePeriod=600 Mar 19 17:03:50 crc kubenswrapper[4792]: E0319 17:03:50.569168 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" Mar 19 17:03:50 crc kubenswrapper[4792]: E0319 17:03:50.569852 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqgbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-gkg4f_openstack-operators(6832677c-467f-4786-b2f8-9c999c94f3ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:50 crc kubenswrapper[4792]: E0319 17:03:50.571217 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podUID="6832677c-467f-4786-b2f8-9c999c94f3ba" Mar 19 17:03:51 crc kubenswrapper[4792]: I0319 17:03:51.028250 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3" exitCode=0 Mar 19 17:03:51 
crc kubenswrapper[4792]: I0319 17:03:51.028444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3"} Mar 19 17:03:51 crc kubenswrapper[4792]: I0319 17:03:51.028526 4792 scope.go:117] "RemoveContainer" containerID="9ca4cbbd386f8a652ca27c6ccc22b2819570a7d2eee2b0dd08a6bf2c10bbac27" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.029513 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podUID="6832677c-467f-4786-b2f8-9c999c94f3ba" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.157072 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.157316 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gz5g8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-8464cc45fb-zkx8w_openstack-operators(bce0486f-f235-464e-acd7-bc8da076eebe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.158626 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.671133 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.671437 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4jlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d58dc466-rd29l_openstack-operators(a1ed7ec7-1763-4593-a115-448e7da65482): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:51 crc kubenswrapper[4792]: E0319 17:03:51.672699 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" podUID="a1ed7ec7-1763-4593-a115-448e7da65482" Mar 19 17:03:52 crc kubenswrapper[4792]: E0319 17:03:52.043920 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" podUID="a1ed7ec7-1763-4593-a115-448e7da65482" Mar 19 17:03:52 crc kubenswrapper[4792]: E0319 17:03:52.043918 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" Mar 19 17:03:52 crc kubenswrapper[4792]: E0319 17:03:52.366726 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" Mar 19 17:03:52 crc kubenswrapper[4792]: E0319 17:03:52.366916 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjth5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5b9f45d989-mt22x_openstack-operators(74eec49e-2c05-49ce-874b-654ec80018e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:52 crc kubenswrapper[4792]: E0319 17:03:52.368097 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.060881 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.145060 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.145283 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hjcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5d488d59fb-dz5pk_openstack-operators(33f808bd-605c-41c7-94fb-92ceab7de0a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.147319 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podUID="33f808bd-605c-41c7-94fb-92ceab7de0a9" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.594637 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.595168 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qpz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-768b96df4c-s2pjr_openstack-operators(80afdbc0-ff4c-4806-884d-ef3542b4de9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:53 crc kubenswrapper[4792]: E0319 17:03:53.596504 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podUID="80afdbc0-ff4c-4806-884d-ef3542b4de9c" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.073231 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podUID="80afdbc0-ff4c-4806-884d-ef3542b4de9c" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.073292 4792 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podUID="33f808bd-605c-41c7-94fb-92ceab7de0a9" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.091527 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.091934 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdmzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c4d75f7f9-rg6qq_openstack-operators(1ca9378b-68d2-4281-b45a-7f40c30bae7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.093524 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.547913 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.548171 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l74x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-x2pbv_openstack-operators(5458fc2b-b774-488b-a5e0-1f66d2df8bfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:03:54 crc kubenswrapper[4792]: E0319 17:03:54.549453 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" podUID="5458fc2b-b774-488b-a5e0-1f66d2df8bfc" Mar 19 17:03:55 crc kubenswrapper[4792]: E0319 17:03:55.083307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" podUID="5458fc2b-b774-488b-a5e0-1f66d2df8bfc" Mar 19 17:03:55 crc 
kubenswrapper[4792]: E0319 17:03:55.083331 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.090165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" event={"ID":"d14a657c-5e70-4847-9b07-f85ce53d7757","Type":"ContainerStarted","Data":"cf20c27984a4051a9e7ff14a87447f555bcc82221088be414f55c94df4b7ace0"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.090750 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.091638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" event={"ID":"29961080-94d4-4275-8d1a-baf1405cf2bb","Type":"ContainerStarted","Data":"88466ab458f8f8248894efb07ba7ad684664c376a0b273aa75052608fb3479fc"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.091734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.093753 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.096919 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" event={"ID":"91a44cfc-5acd-4b7c-814c-1521b5e2b85d","Type":"ContainerStarted","Data":"76b7e103214a3f45ba20cbdcd9f1da26d282154e7d0345de7c72a7667cb5c3b2"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.096984 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.098141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" event={"ID":"e4f68cf5-d501-4468-a9a4-b959ae49db87","Type":"ContainerStarted","Data":"ee446cb4a5f43bbf55d76bbd02df00b353766d94b37a8db2d95d6f0b1de2e50f"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.098318 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.099638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" event={"ID":"c82a8813-bf57-4e7c-88fb-34b0ebee51be","Type":"ContainerStarted","Data":"d7bd6b89c82f0185581f04d6ecac175261f0f3fc9bcb2ed7a683ef0c9004bd2a"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.099755 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.101162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" event={"ID":"335bce01-df52-41ca-b47a-daa5e8ac917e","Type":"ContainerStarted","Data":"917c728f5ab31a81f52830af102e6a659c5efe73c09e20a849344000a6b8f31a"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 
17:03:56.101276 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.103082 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" event={"ID":"9bb5702e-9617-4fb3-a13b-32aa8f7209bc","Type":"ContainerStarted","Data":"fa2a16194f1ce86f3c91ef94a83d7f6a487c2cd289c588b98044fdfb840d13a7"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.103207 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.104625 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" event={"ID":"2dceb468-ce3f-4650-ae5e-694664ffb360","Type":"ContainerStarted","Data":"7764b90490bad903940ae7b736de1f16ccd343fd2f7d3d6ac52569790baa2ccf"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.104883 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.105949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" event={"ID":"23c3a809-9d7c-4d60-be1f-2fbc1583e5d6","Type":"ContainerStarted","Data":"b5c90e445d1123d301146b9ff2e1e178ffb62c3e6a46118372d1f28396ccb369"} Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.106148 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.134272 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podStartSLOduration=4.781586437 podStartE2EDuration="27.134254827s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.656616023 +0000 UTC m=+1375.802673563" lastFinishedPulling="2026-03-19 17:03:55.009284413 +0000 UTC m=+1398.155341953" observedRunningTime="2026-03-19 17:03:56.131221094 +0000 UTC m=+1399.277278664" watchObservedRunningTime="2026-03-19 17:03:56.134254827 +0000 UTC m=+1399.280312367" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.257812 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podStartSLOduration=3.871437094 podStartE2EDuration="26.257797983s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.608342857 +0000 UTC m=+1375.754400397" lastFinishedPulling="2026-03-19 17:03:54.994703746 +0000 UTC m=+1398.140761286" observedRunningTime="2026-03-19 17:03:56.248041157 +0000 UTC m=+1399.394098717" watchObservedRunningTime="2026-03-19 17:03:56.257797983 +0000 UTC m=+1399.403855523" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.261363 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podStartSLOduration=4.433701424 podStartE2EDuration="26.261349679s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:33.168019057 +0000 UTC m=+1376.314076597" lastFinishedPulling="2026-03-19 17:03:54.995667312 +0000 UTC m=+1398.141724852" observedRunningTime="2026-03-19 17:03:56.216518548 +0000 UTC m=+1399.362576088" watchObservedRunningTime="2026-03-19 17:03:56.261349679 +0000 UTC m=+1399.407407219" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.320137 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" podStartSLOduration=4.112591296 podStartE2EDuration="27.320117281s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.292551943 +0000 UTC m=+1374.438609483" lastFinishedPulling="2026-03-19 17:03:54.500077928 +0000 UTC m=+1397.646135468" observedRunningTime="2026-03-19 17:03:56.307387624 +0000 UTC m=+1399.453445164" watchObservedRunningTime="2026-03-19 17:03:56.320117281 +0000 UTC m=+1399.466174811" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.360592 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" podStartSLOduration=3.95975741 podStartE2EDuration="26.360577773s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.608009158 +0000 UTC m=+1375.754066698" lastFinishedPulling="2026-03-19 17:03:55.008829521 +0000 UTC m=+1398.154887061" observedRunningTime="2026-03-19 17:03:56.358866647 +0000 UTC m=+1399.504924177" watchObservedRunningTime="2026-03-19 17:03:56.360577773 +0000 UTC m=+1399.506635313" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.428080 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" podStartSLOduration=3.8182964569999998 podStartE2EDuration="27.428059742s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:30.89091964 +0000 UTC m=+1374.036977180" lastFinishedPulling="2026-03-19 17:03:54.500682925 +0000 UTC m=+1397.646740465" observedRunningTime="2026-03-19 17:03:56.400385369 +0000 UTC m=+1399.546442899" watchObservedRunningTime="2026-03-19 17:03:56.428059742 +0000 UTC m=+1399.574117282" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.439166 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" podStartSLOduration=4.036275427 podStartE2EDuration="27.439148734s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.591911301 +0000 UTC m=+1374.737968841" lastFinishedPulling="2026-03-19 17:03:54.994784608 +0000 UTC m=+1398.140842148" observedRunningTime="2026-03-19 17:03:56.436094951 +0000 UTC m=+1399.582152491" watchObservedRunningTime="2026-03-19 17:03:56.439148734 +0000 UTC m=+1399.585206274" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.468956 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" podStartSLOduration=4.297492735 podStartE2EDuration="27.468939977s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.823350737 +0000 UTC m=+1374.969408277" lastFinishedPulling="2026-03-19 17:03:54.994797979 +0000 UTC m=+1398.140855519" observedRunningTime="2026-03-19 17:03:56.466272234 +0000 UTC m=+1399.612329774" watchObservedRunningTime="2026-03-19 17:03:56.468939977 +0000 UTC m=+1399.614997517" Mar 19 17:03:56 crc kubenswrapper[4792]: I0319 17:03:56.490174 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" podStartSLOduration=4.597425486 podStartE2EDuration="26.490160105s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:33.174282787 +0000 UTC m=+1376.320340327" lastFinishedPulling="2026-03-19 17:03:55.067017406 +0000 UTC m=+1398.213074946" observedRunningTime="2026-03-19 17:03:56.487995425 +0000 UTC m=+1399.634052965" watchObservedRunningTime="2026-03-19 17:03:56.490160105 +0000 UTC m=+1399.636217645" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.131594 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29565664-j9rd2"] Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.133536 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.138613 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.138800 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.138960 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.144005 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565664-j9rd2"] Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.160149 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.187799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxkb\" (UniqueName: \"kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb\") pod \"auto-csr-approver-29565664-j9rd2\" (UID: \"fe46a20a-3d00-4840-b3f0-08a10149eefd\") " pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.252930 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.276066 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.289286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxkb\" (UniqueName: \"kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb\") pod \"auto-csr-approver-29565664-j9rd2\" (UID: \"fe46a20a-3d00-4840-b3f0-08a10149eefd\") " pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.311825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxkb\" (UniqueName: \"kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb\") pod \"auto-csr-approver-29565664-j9rd2\" (UID: \"fe46a20a-3d00-4840-b3f0-08a10149eefd\") " pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.327561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.466261 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.614563 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.794795 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" Mar 19 17:04:00 crc kubenswrapper[4792]: I0319 17:04:00.969227 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565664-j9rd2"] Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.039296 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.098560 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.137099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.146612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" event={"ID":"b7f6258a-2ce1-482c-84ee-e869f191cb69","Type":"ContainerStarted","Data":"c28aa1dc1d29662f1f56ad32a8d85849eb111f42af8515ead8df7f1b6043a7a0"} Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.146804 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.147575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29565664-j9rd2" event={"ID":"fe46a20a-3d00-4840-b3f0-08a10149eefd","Type":"ContainerStarted","Data":"172f07b3d35ab5fa9f46a0db524d3b1f37b3dc7cb206f62df19e12ed29fc01f2"} Mar 19 17:04:01 crc kubenswrapper[4792]: I0319 17:04:01.187186 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podStartSLOduration=4.295980714 podStartE2EDuration="32.18716823s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.628958979 +0000 UTC m=+1375.775016509" lastFinishedPulling="2026-03-19 17:04:00.520146485 +0000 UTC m=+1403.666204025" observedRunningTime="2026-03-19 17:04:01.179055299 +0000 UTC m=+1404.325112839" watchObservedRunningTime="2026-03-19 17:04:01.18716823 +0000 UTC m=+1404.333225770" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.027810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.055391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae024059-6924-482c-88b6-c845e6932026-cert\") pod \"infra-operator-controller-manager-7b9c774f96-p22vv\" (UID: \"ae024059-6924-482c-88b6-c845e6932026\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.157022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" 
event={"ID":"ca8f4495-eabc-425f-82dd-f3c5329de925","Type":"ContainerStarted","Data":"a8becef1a7bf689d6527cc452871fa526c818a8c36ca6bfe9d324630bb82ecf5"} Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.157373 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.176561 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podStartSLOduration=4.494857822 podStartE2EDuration="33.176545019s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.618688058 +0000 UTC m=+1375.764745588" lastFinishedPulling="2026-03-19 17:04:01.300375245 +0000 UTC m=+1404.446432785" observedRunningTime="2026-03-19 17:04:02.171300256 +0000 UTC m=+1405.317357806" watchObservedRunningTime="2026-03-19 17:04:02.176545019 +0000 UTC m=+1405.322602549" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.224170 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-fsnst" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.231615 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.232232 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.237061 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29107ce9-41d6-410b-b256-723555fd6169-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-lhq2p\" (UID: \"29107ce9-41d6-410b-b256-723555fd6169\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.247403 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2sl2n" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.255446 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.678877 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv"] Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.810401 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p"] Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.945009 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:04:02 crc kubenswrapper[4792]: I0319 17:04:02.964287 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f-metrics-certs\") pod \"openstack-operator-controller-manager-6c7d9f85c5-9nrsb\" (UID: \"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f\") " pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:04:03 crc kubenswrapper[4792]: I0319 17:04:03.060270 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qknw2" Mar 19 17:04:03 crc kubenswrapper[4792]: I0319 17:04:03.066920 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:04:03 crc kubenswrapper[4792]: I0319 17:04:03.186967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" event={"ID":"29107ce9-41d6-410b-b256-723555fd6169","Type":"ContainerStarted","Data":"c997e5faa0c7810604d1b9556b6010c38d5823777f078cf4becff5eb6b62f04c"} Mar 19 17:04:03 crc kubenswrapper[4792]: I0319 17:04:03.191592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" event={"ID":"ae024059-6924-482c-88b6-c845e6932026","Type":"ContainerStarted","Data":"1391cf577c5fb1baeb9a985a2fc5069d6f97fca46e08c0a30278b76862f40c11"} Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:03.576727 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb"] Mar 19 17:04:04 crc kubenswrapper[4792]: W0319 17:04:03.593391 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da4bcd6_1b9f_450c_9ed2_34bd70bc6a8f.slice/crio-8d3db1d35cc1cfe4beb1588ea43b78348346b11d4513adb59c1e867caf79eb3e WatchSource:0}: Error finding container 
8d3db1d35cc1cfe4beb1588ea43b78348346b11d4513adb59c1e867caf79eb3e: Status 404 returned error can't find the container with id 8d3db1d35cc1cfe4beb1588ea43b78348346b11d4513adb59c1e867caf79eb3e Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.205780 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" event={"ID":"fe46a20a-3d00-4840-b3f0-08a10149eefd","Type":"ContainerStarted","Data":"9be402fd0a2ac903bcc6f1c090a28e25b7ae33423ae488bf771ec2dd01bf9ca1"} Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.207870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" event={"ID":"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f","Type":"ContainerStarted","Data":"e7a7570b52a5085a2f2e0e442b5d8bd04be03e1119f78b002cdbd17216d838b9"} Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.207898 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" event={"ID":"8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f","Type":"ContainerStarted","Data":"8d3db1d35cc1cfe4beb1588ea43b78348346b11d4513adb59c1e867caf79eb3e"} Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.208035 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.230492 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" podStartSLOduration=1.863675174 podStartE2EDuration="4.230474525s" podCreationTimestamp="2026-03-19 17:04:00 +0000 UTC" firstStartedPulling="2026-03-19 17:04:00.973612971 +0000 UTC m=+1404.119670511" lastFinishedPulling="2026-03-19 17:04:03.340412322 +0000 UTC m=+1406.486469862" observedRunningTime="2026-03-19 17:04:04.226273861 +0000 UTC m=+1407.372331421" 
watchObservedRunningTime="2026-03-19 17:04:04.230474525 +0000 UTC m=+1407.376532065" Mar 19 17:04:04 crc kubenswrapper[4792]: I0319 17:04:04.255562 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podStartSLOduration=34.255542878 podStartE2EDuration="34.255542878s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:04:04.250537642 +0000 UTC m=+1407.396595182" watchObservedRunningTime="2026-03-19 17:04:04.255542878 +0000 UTC m=+1407.401600418" Mar 19 17:04:05 crc kubenswrapper[4792]: I0319 17:04:05.234385 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" event={"ID":"d89e09ff-441b-491e-98f7-9bf618322505","Type":"ContainerStarted","Data":"ae2ca26448e4faa88ddbee76e6365c56529215a2b60b40c2008823b0dbdfd92e"} Mar 19 17:04:05 crc kubenswrapper[4792]: I0319 17:04:05.234920 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:04:05 crc kubenswrapper[4792]: I0319 17:04:05.237772 4792 generic.go:334] "Generic (PLEG): container finished" podID="fe46a20a-3d00-4840-b3f0-08a10149eefd" containerID="9be402fd0a2ac903bcc6f1c090a28e25b7ae33423ae488bf771ec2dd01bf9ca1" exitCode=0 Mar 19 17:04:05 crc kubenswrapper[4792]: I0319 17:04:05.237800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" event={"ID":"fe46a20a-3d00-4840-b3f0-08a10149eefd","Type":"ContainerDied","Data":"9be402fd0a2ac903bcc6f1c090a28e25b7ae33423ae488bf771ec2dd01bf9ca1"} Mar 19 17:04:05 crc kubenswrapper[4792]: I0319 17:04:05.255350 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podStartSLOduration=4.404861271 podStartE2EDuration="36.255332061s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.608542212 +0000 UTC m=+1375.754599752" lastFinishedPulling="2026-03-19 17:04:04.459013002 +0000 UTC m=+1407.605070542" observedRunningTime="2026-03-19 17:04:05.248380842 +0000 UTC m=+1408.394438382" watchObservedRunningTime="2026-03-19 17:04:05.255332061 +0000 UTC m=+1408.401389601" Mar 19 17:04:06 crc kubenswrapper[4792]: I0319 17:04:06.722857 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:06 crc kubenswrapper[4792]: I0319 17:04:06.828437 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbxkb\" (UniqueName: \"kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb\") pod \"fe46a20a-3d00-4840-b3f0-08a10149eefd\" (UID: \"fe46a20a-3d00-4840-b3f0-08a10149eefd\") " Mar 19 17:04:06 crc kubenswrapper[4792]: I0319 17:04:06.833748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb" (OuterVolumeSpecName: "kube-api-access-mbxkb") pod "fe46a20a-3d00-4840-b3f0-08a10149eefd" (UID: "fe46a20a-3d00-4840-b3f0-08a10149eefd"). InnerVolumeSpecName "kube-api-access-mbxkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:04:06 crc kubenswrapper[4792]: I0319 17:04:06.936340 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbxkb\" (UniqueName: \"kubernetes.io/projected/fe46a20a-3d00-4840-b3f0-08a10149eefd-kube-api-access-mbxkb\") on node \"crc\" DevicePath \"\"" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.256594 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" event={"ID":"fe46a20a-3d00-4840-b3f0-08a10149eefd","Type":"ContainerDied","Data":"172f07b3d35ab5fa9f46a0db524d3b1f37b3dc7cb206f62df19e12ed29fc01f2"} Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.256979 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172f07b3d35ab5fa9f46a0db524d3b1f37b3dc7cb206f62df19e12ed29fc01f2" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.256648 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565664-j9rd2" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.258589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" event={"ID":"6832677c-467f-4786-b2f8-9c999c94f3ba","Type":"ContainerStarted","Data":"a2922ebdc20527ee2adc7fb32c6da04715ba420930761598e8ce53ca17a0ab8d"} Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.258813 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.260260 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" event={"ID":"ae024059-6924-482c-88b6-c845e6932026","Type":"ContainerStarted","Data":"44d7bec563e8172347bd049d3ad857f7f5e7586a8addfa03fbc7a68a57cdaa1d"} Mar 19 
17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.260386 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.262127 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" event={"ID":"29107ce9-41d6-410b-b256-723555fd6169","Type":"ContainerStarted","Data":"8f7cd69b8f817133efad67fcd169cf67b2e02bddb0df45d8ec26e6246da2109e"} Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.262197 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.263763 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" event={"ID":"74eec49e-2c05-49ce-874b-654ec80018e6","Type":"ContainerStarted","Data":"d860e960f1fdf5f0b8a0e62056d6b6363e564a8e56fe01c48046f5667be2dc85"} Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.263906 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.265467 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" event={"ID":"1ca9378b-68d2-4281-b45a-7f40c30bae7c","Type":"ContainerStarted","Data":"b2a20bd6a0888feea1a9d26e7fbf09a5020301317a1d54fc2b611dccd8a654dd"} Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.266232 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.287822 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podStartSLOduration=3.790330494 podStartE2EDuration="37.287807233s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:33.170234707 +0000 UTC m=+1376.316292247" lastFinishedPulling="2026-03-19 17:04:06.667711446 +0000 UTC m=+1409.813768986" observedRunningTime="2026-03-19 17:04:07.28109899 +0000 UTC m=+1410.427156520" watchObservedRunningTime="2026-03-19 17:04:07.287807233 +0000 UTC m=+1410.433864773" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.301507 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podStartSLOduration=3.298224914 podStartE2EDuration="37.301489425s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.672831044 +0000 UTC m=+1375.818888584" lastFinishedPulling="2026-03-19 17:04:06.676095555 +0000 UTC m=+1409.822153095" observedRunningTime="2026-03-19 17:04:07.300614372 +0000 UTC m=+1410.446671932" watchObservedRunningTime="2026-03-19 17:04:07.301489425 +0000 UTC m=+1410.447546965" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.324712 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565658-586dw"] Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.340140 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565658-586dw"] Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.346381 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" podStartSLOduration=33.476622624 podStartE2EDuration="37.346359678s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 
17:04:02.797996883 +0000 UTC m=+1405.944054433" lastFinishedPulling="2026-03-19 17:04:06.667733907 +0000 UTC m=+1409.813791487" observedRunningTime="2026-03-19 17:04:07.336462848 +0000 UTC m=+1410.482520388" watchObservedRunningTime="2026-03-19 17:04:07.346359678 +0000 UTC m=+1410.492417218" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.406082 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podStartSLOduration=3.880550463 podStartE2EDuration="37.406064425s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:33.142518333 +0000 UTC m=+1376.288575873" lastFinishedPulling="2026-03-19 17:04:06.668032285 +0000 UTC m=+1409.814089835" observedRunningTime="2026-03-19 17:04:07.398671534 +0000 UTC m=+1410.544729074" watchObservedRunningTime="2026-03-19 17:04:07.406064425 +0000 UTC m=+1410.552121965" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.408406 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podStartSLOduration=34.425111641 podStartE2EDuration="38.408395269s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:04:02.684447799 +0000 UTC m=+1405.830505339" lastFinishedPulling="2026-03-19 17:04:06.667731387 +0000 UTC m=+1409.813788967" observedRunningTime="2026-03-19 17:04:07.367074902 +0000 UTC m=+1410.513132442" watchObservedRunningTime="2026-03-19 17:04:07.408395269 +0000 UTC m=+1410.554452819" Mar 19 17:04:07 crc kubenswrapper[4792]: I0319 17:04:07.750485 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f138b905-2e1e-42a0-a36c-b1a31b9811bd" path="/var/lib/kubelet/pods/f138b905-2e1e-42a0-a36c-b1a31b9811bd/volumes" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.290792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" event={"ID":"bce0486f-f235-464e-acd7-bc8da076eebe","Type":"ContainerStarted","Data":"e3d0145fcafb25a88478e07c7c5b0c46e673ef1b90683b7f50fac8b4582e345b"} Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.292173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.293929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" event={"ID":"33f808bd-605c-41c7-94fb-92ceab7de0a9","Type":"ContainerStarted","Data":"f6ff240a120b8c99ccc961f3f089b9ce2ed6dc79eddff02f3d1a39d052406412"} Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.294693 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.296642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" event={"ID":"a1ed7ec7-1763-4593-a115-448e7da65482","Type":"ContainerStarted","Data":"cedc795cd260789e6b64643dc727efc71fe1ca7040e5652518444573c982d64a"} Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.296812 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.297959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" event={"ID":"80afdbc0-ff4c-4806-884d-ef3542b4de9c","Type":"ContainerStarted","Data":"3a4e5734b2463d415828b94a1fb2da7bf4b0abce77e46b5e8a882f4bf666d736"} Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.298817 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.318050 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podStartSLOduration=3.504001433 podStartE2EDuration="39.318026964s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.820254132 +0000 UTC m=+1374.966311662" lastFinishedPulling="2026-03-19 17:04:07.634279653 +0000 UTC m=+1410.780337193" observedRunningTime="2026-03-19 17:04:08.314251222 +0000 UTC m=+1411.460308772" watchObservedRunningTime="2026-03-19 17:04:08.318026964 +0000 UTC m=+1411.464084504" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.339061 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podStartSLOduration=3.358525778 podStartE2EDuration="38.339028496s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:32.655580534 +0000 UTC m=+1375.801638074" lastFinishedPulling="2026-03-19 17:04:07.636083252 +0000 UTC m=+1410.782140792" observedRunningTime="2026-03-19 17:04:08.3318191 +0000 UTC m=+1411.477876650" watchObservedRunningTime="2026-03-19 17:04:08.339028496 +0000 UTC m=+1411.485086036" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.353907 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podStartSLOduration=3.7760062359999997 podStartE2EDuration="39.353890822s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.820280313 +0000 UTC m=+1374.966337853" lastFinishedPulling="2026-03-19 17:04:07.398164899 +0000 UTC m=+1410.544222439" observedRunningTime="2026-03-19 17:04:08.351488736 +0000 UTC 
m=+1411.497546296" watchObservedRunningTime="2026-03-19 17:04:08.353890822 +0000 UTC m=+1411.499948362" Mar 19 17:04:08 crc kubenswrapper[4792]: I0319 17:04:08.380144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" podStartSLOduration=3.7668514760000003 podStartE2EDuration="39.380119877s" podCreationTimestamp="2026-03-19 17:03:29 +0000 UTC" firstStartedPulling="2026-03-19 17:03:31.646514378 +0000 UTC m=+1374.792571918" lastFinishedPulling="2026-03-19 17:04:07.259782789 +0000 UTC m=+1410.405840319" observedRunningTime="2026-03-19 17:04:08.375979964 +0000 UTC m=+1411.522037524" watchObservedRunningTime="2026-03-19 17:04:08.380119877 +0000 UTC m=+1411.526177407" Mar 19 17:04:09 crc kubenswrapper[4792]: I0319 17:04:09.310397 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" event={"ID":"5458fc2b-b774-488b-a5e0-1f66d2df8bfc","Type":"ContainerStarted","Data":"a990df01ad57cad606da8b8bc930f16a13f08b4dd69d881b013a740ac725250b"} Mar 19 17:04:09 crc kubenswrapper[4792]: I0319 17:04:09.333249 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-x2pbv" podStartSLOduration=4.048298543 podStartE2EDuration="39.333229577s" podCreationTimestamp="2026-03-19 17:03:30 +0000 UTC" firstStartedPulling="2026-03-19 17:03:33.167471242 +0000 UTC m=+1376.313528782" lastFinishedPulling="2026-03-19 17:04:08.452402286 +0000 UTC m=+1411.598459816" observedRunningTime="2026-03-19 17:04:09.328408256 +0000 UTC m=+1412.474465796" watchObservedRunningTime="2026-03-19 17:04:09.333229577 +0000 UTC m=+1412.479287117" Mar 19 17:04:10 crc kubenswrapper[4792]: I0319 17:04:10.522442 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 
17:04:10 crc kubenswrapper[4792]: I0319 17:04:10.627891 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 17:04:10 crc kubenswrapper[4792]: I0319 17:04:10.645668 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 17:04:11 crc kubenswrapper[4792]: I0319 17:04:11.231105 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 17:04:12 crc kubenswrapper[4792]: I0319 17:04:12.240510 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 17:04:12 crc kubenswrapper[4792]: I0319 17:04:12.271711 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 17:04:13 crc kubenswrapper[4792]: I0319 17:04:13.074879 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 17:04:20.198829 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 17:04:20.421535 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 17:04:20.593674 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 
17:04:20.653897 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 17:04:20.685895 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" Mar 19 17:04:20 crc kubenswrapper[4792]: I0319 17:04:20.916851 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.345230 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:04:37 crc kubenswrapper[4792]: E0319 17:04:37.346158 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe46a20a-3d00-4840-b3f0-08a10149eefd" containerName="oc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.346172 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe46a20a-3d00-4840-b3f0-08a10149eefd" containerName="oc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.346359 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe46a20a-3d00-4840-b3f0-08a10149eefd" containerName="oc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.347675 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.352556 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.352790 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.354465 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qr6hf" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.354498 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.354612 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.453950 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.457570 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.463789 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.470296 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.489683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.490006 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2bm\" (UniqueName: \"kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.591240 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.591305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc 
kubenswrapper[4792]: I0319 17:04:37.591438 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.591511 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fffk\" (UniqueName: \"kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.591593 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2bm\" (UniqueName: \"kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.592156 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.609721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2bm\" (UniqueName: \"kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm\") pod \"dnsmasq-dns-675f4bcbfc-dvvsx\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 
17:04:37.667300 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.692718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.692797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.692850 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fffk\" (UniqueName: \"kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.693641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.695510 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.703828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.713650 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fffk\" (UniqueName: \"kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk\") pod \"dnsmasq-dns-78dd6ddcc-56ppc\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:37 crc kubenswrapper[4792]: I0319 17:04:37.783640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:04:38 crc kubenswrapper[4792]: I0319 17:04:38.144306 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:04:38 crc kubenswrapper[4792]: I0319 17:04:38.303855 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:04:38 crc kubenswrapper[4792]: W0319 17:04:38.304504 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcacd38a_1e90_49d1_8327_cc0c143b1e24.slice/crio-22f21c60b6ac0730b60df583b22663548c5969445169df53f1b9fb0653ef255b WatchSource:0}: Error finding container 22f21c60b6ac0730b60df583b22663548c5969445169df53f1b9fb0653ef255b: Status 404 returned error can't find the container with id 22f21c60b6ac0730b60df583b22663548c5969445169df53f1b9fb0653ef255b Mar 19 17:04:38 crc kubenswrapper[4792]: I0319 17:04:38.544126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" event={"ID":"fcacd38a-1e90-49d1-8327-cc0c143b1e24","Type":"ContainerStarted","Data":"22f21c60b6ac0730b60df583b22663548c5969445169df53f1b9fb0653ef255b"} Mar 19 17:04:38 crc 
kubenswrapper[4792]: I0319 17:04:38.545262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" event={"ID":"9f97023d-3238-45c3-9409-ecb03a63f844","Type":"ContainerStarted","Data":"38daeb2f7a97a12120f0102b222b8229d883cc856daa48a3ef85ec292d47e0bd"} Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.051985 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.082763 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.084829 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.090404 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.246501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.246894 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.246930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.259153 4792 scope.go:117] "RemoveContainer" containerID="3244184b4cbceafcb012d54a191bea8048c5c6bebc6ccfca7cf7f7061ab3cc7f" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.348052 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.348192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.348351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.349377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.352551 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.383777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7\") pod \"dnsmasq-dns-666b6646f7-9sjzh\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.390334 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.427916 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.429270 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.432224 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.460336 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"] Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.555928 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.556010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.556090 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhvdl\" (UniqueName: \"kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.659028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.659468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.659676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhvdl\" (UniqueName: \"kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.660474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.661021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.688658 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhvdl\" (UniqueName: \"kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl\") pod \"dnsmasq-dns-57d769cc4f-qdtgt\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:40 crc kubenswrapper[4792]: I0319 17:04:40.754672 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.060315 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.238047 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.239814 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.254233 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.254292 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.256119 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260000 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260465 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260600 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jdpwn" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260738 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.260952 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.261063 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.299317 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.302154 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.307057 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.332910 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373084 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373145 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373164 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373202 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373216 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373259 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373279 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373335 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dst7m\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373379 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8kk\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373432 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373451 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373495 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373515 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zkdp\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373548 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373565 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373579 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373598 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373698 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.373790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.376721 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"]
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475705 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475813 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475828 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475906 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.475994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dst7m\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8kk\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476065 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476082 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476137 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476171 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zkdp\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476188 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476253 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.476757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.478996 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.479371 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.479748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.481712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.482096 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.482884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.482957 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.483170 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.483631 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.483937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.484734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.485208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.486002 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.486070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.486987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.490240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.493828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.493884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.494005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.494074 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.494391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.497777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.499700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.499700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.500011 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.500140 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/63576ad5fa42431418a875a556f725540f55ae4f6468824ed68600c688720c80/globalmount\"" pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.500383 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.500451 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8ca26b84d347a31d255fc230498e1b3b968f4e7b0bb0d1644032f336cb0edaa8/globalmount\"" pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.501153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.501346 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.501374 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c4c41f9e3b3f86aec75af0301350a0459ec825b041a9d3df4027e381e6ff6c22/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.502421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.505391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8kk\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.507531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dst7m\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.511739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zkdp\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.573855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.576295 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.576432 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " pod="openstack/rabbitmq-server-1"
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.580810 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.585993 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gqblj" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.586227 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.586388 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.586612 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.586753 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.586917 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.587061 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.594856 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " pod="openstack/rabbitmq-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.601211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" event={"ID":"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9","Type":"ContainerStarted","Data":"66069c7bd3a83c7db4b5bef485e40e2c00425d823817b75cefad550e20ac3d6b"} 
Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.605550 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.606920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" event={"ID":"2aaf8c82-efe8-424c-8e61-e0c418980262","Type":"ContainerStarted","Data":"893fff7973d7f64792bea973d43f2aba1a36c30aac4a52b7102eaf2e8e01a1d3"} Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.668521 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679440 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679493 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679523 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679605 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679632 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvwg\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.679999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.680068 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.680104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.680137 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.734785 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.748367 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782373 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxvwg\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782699 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.782723 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.790791 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.791956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.792458 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.792737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.793049 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.793731 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.795816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.797830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.805419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:41 crc kubenswrapper[4792]: I0319 17:04:41.833310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxvwg\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.059971 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.060418 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76311b204d5b95e55d82801e47aa6cbf79945f7bf65419c3aa5650d333431014/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.206588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.247302 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.370338 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:04:42 crc kubenswrapper[4792]: W0319 17:04:42.379474 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae950307_1857_4a46_ab98_55843387f128.slice/crio-21c9922008dfa0267810397560ff1cadb9a18a683205b9fe39894866c0e924fc WatchSource:0}: Error finding container 21c9922008dfa0267810397560ff1cadb9a18a683205b9fe39894866c0e924fc: Status 404 returned error can't find the container with id 21c9922008dfa0267810397560ff1cadb9a18a683205b9fe39894866c0e924fc Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.623576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerStarted","Data":"21c9922008dfa0267810397560ff1cadb9a18a683205b9fe39894866c0e924fc"} Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.656608 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.682971 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.768032 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.775483 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.781467 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.782410 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.784805 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sgm27" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.785722 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.800490 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.803047 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923699 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923751 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923885 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.923980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.924005 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6pj\" (UniqueName: \"kubernetes.io/projected/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kube-api-access-ch6pj\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:42 crc kubenswrapper[4792]: I0319 17:04:42.924026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025383 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025446 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6pj\" (UniqueName: \"kubernetes.io/projected/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kube-api-access-ch6pj\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.025732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.027360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.028991 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.033984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.034301 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.060859 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0" Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.062473 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.062506 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fbc51676449fee6705cc49ba77c6a9955048a4d94d2c6cd2a05830e1de2d0c6c/globalmount\"" pod="openstack/openstack-galera-0"
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.064515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6pj\" (UniqueName: \"kubernetes.io/projected/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-kube-api-access-ch6pj\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0"
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.064992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0"
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.120830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e62149c-3341-429a-9a5c-ac7f57a1d19f\") pod \"openstack-galera-0\" (UID: \"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575\") " pod="openstack/openstack-galera-0"
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.140250 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.419347 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.771433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerStarted","Data":"729951d1a403c1a25657d9fc18344d0287e0f41ca4e13ee6d216c099ffc93f66"}
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.771963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerStarted","Data":"ff55647277071a20ec2125b111d5df166abfd965af5819eefa5f08bc4bfc47ca"}
Mar 19 17:04:43 crc kubenswrapper[4792]: I0319 17:04:43.771977 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerStarted","Data":"999c30930a1dffe285667e1dc1777106cf7ce3530556817832f3ec9970b657d6"}
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.023725 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.066871 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.070795 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.080355 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dxj4n"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.081256 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.082087 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.084281 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.105293 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.172325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.172397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.172507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.172624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.172718 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.173070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.173158 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwwn\" (UniqueName: \"kubernetes.io/projected/74993dec-a63b-4856-913e-39ec56f88058-kube-api-access-tlwwn\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.173478 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74993dec-a63b-4856-913e-39ec56f88058-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280384 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwwn\" (UniqueName: \"kubernetes.io/projected/74993dec-a63b-4856-913e-39ec56f88058-kube-api-access-tlwwn\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280584 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74993dec-a63b-4856-913e-39ec56f88058-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280629 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.280664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.282719 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74993dec-a63b-4856-913e-39ec56f88058-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.289394 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.291672 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.295165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74993dec-a63b-4856-913e-39ec56f88058-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.300820 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.310215 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.310280 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29f368e638390892bacd2a3ea4b204423f44aad63a658f4070871f27d28fe5c4/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.314508 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwwn\" (UniqueName: \"kubernetes.io/projected/74993dec-a63b-4856-913e-39ec56f88058-kube-api-access-tlwwn\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.338684 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.347173 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.353052 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74993dec-a63b-4856-913e-39ec56f88058-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.353156 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-c4npd"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.353530 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.353259 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.381607 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.382922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-kolla-config\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.382966 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-config-data\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.383460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.383549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6phx\" (UniqueName: \"kubernetes.io/projected/7f277fa1-4306-4605-b619-ab8b8df16ae5-kube-api-access-t6phx\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.383696 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.399429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbe0cfb-ac00-4565-a7f3-eb5784e68c45\") pod \"openstack-cell1-galera-0\" (UID: \"74993dec-a63b-4856-913e-39ec56f88058\") " pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.437514 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.484983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-kolla-config\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.485280 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-config-data\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.485385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.485414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6phx\" (UniqueName: \"kubernetes.io/projected/7f277fa1-4306-4605-b619-ab8b8df16ae5-kube-api-access-t6phx\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.485458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.486410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-kolla-config\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.487531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f277fa1-4306-4605-b619-ab8b8df16ae5-config-data\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.491433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.508229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6phx\" (UniqueName: \"kubernetes.io/projected/7f277fa1-4306-4605-b619-ab8b8df16ae5-kube-api-access-t6phx\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.514585 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f277fa1-4306-4605-b619-ab8b8df16ae5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7f277fa1-4306-4605-b619-ab8b8df16ae5\") " pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.696397 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 19 17:04:44 crc kubenswrapper[4792]: I0319 17:04:44.801609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerStarted","Data":"d942bca0498a50a0601d7d2f6f7356371f961bd8b30dfd3de5aa277665b7362e"}
Mar 19 17:04:46 crc kubenswrapper[4792]: I0319 17:04:46.841635 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 19 17:04:46 crc kubenswrapper[4792]: I0319 17:04:46.844014 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 19 17:04:46 crc kubenswrapper[4792]: I0319 17:04:46.845805 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-69584"
Mar 19 17:04:46 crc kubenswrapper[4792]: I0319 17:04:46.866029 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 19 17:04:46 crc kubenswrapper[4792]: I0319 17:04:46.966799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66vp\" (UniqueName: \"kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp\") pod \"kube-state-metrics-0\" (UID: \"a72bb0db-ba96-464e-84be-283010baf52c\") " pod="openstack/kube-state-metrics-0"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.075634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r66vp\" (UniqueName: \"kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp\") pod \"kube-state-metrics-0\" (UID: \"a72bb0db-ba96-464e-84be-283010baf52c\") " pod="openstack/kube-state-metrics-0"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.119161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r66vp\" (UniqueName: \"kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp\") pod \"kube-state-metrics-0\" (UID: \"a72bb0db-ba96-464e-84be-283010baf52c\") " pod="openstack/kube-state-metrics-0"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.177511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.947509 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"]
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.965781 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.973350 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Mar 19 17:04:47 crc kubenswrapper[4792]: I0319 17:04:47.976882 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-blqz6"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.011144 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2q8\" (UniqueName: \"kubernetes.io/projected/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-kube-api-access-jn2q8\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.011212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.075431 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"]
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.113738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.113944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2q8\" (UniqueName: \"kubernetes.io/projected/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-kube-api-access-jn2q8\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:48 crc kubenswrapper[4792]: E0319 17:04:48.114281 4792 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Mar 19 17:04:48 crc kubenswrapper[4792]: E0319 17:04:48.114333 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert podName:b958b34e-1fbb-4f66-bec7-130b5a0d2d9c nodeName:}" failed. No retries permitted until 2026-03-19 17:04:48.614317072 +0000 UTC m=+1451.760374612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert") pod "observability-ui-dashboards-7f87b9b85b-49sm6" (UID: "b958b34e-1fbb-4f66-bec7-130b5a0d2d9c") : secret "observability-ui-dashboards" not found
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.198421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2q8\" (UniqueName: \"kubernetes.io/projected/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-kube-api-access-jn2q8\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.398994 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8656c6c5d8-kzwmx"]
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.400320 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.435780 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8656c6c5d8-kzwmx"]
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.505004 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.514319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.516221 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.516555 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.516700 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.516789 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.518314 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.533110 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.536744 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.543008 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lh4q9"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545054 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-oauth-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545100 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545171 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-trusted-ca-bundle\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545261 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-service-ca\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-oauth-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bd5\" (UniqueName: \"kubernetes.io/projected/9100a499-798c-4e58-815d-030f63f25740-kube-api-access-w4bd5\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.545399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-console-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.557353 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-service-ca\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lw97\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.648816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-oauth-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bd5\" (UniqueName: \"kubernetes.io/projected/9100a499-798c-4e58-815d-030f63f25740-kube-api-access-w4bd5\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx"
Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0"
Mar 19
17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-console-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649739 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649766 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-oauth-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649886 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.649980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-trusted-ca-bundle\") pod 
\"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.650058 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-service-ca\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.650163 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-oauth-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.650748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-console-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.652757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9100a499-798c-4e58-815d-030f63f25740-trusted-ca-bundle\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.656816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-serving-cert\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " 
pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.659699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9100a499-798c-4e58-815d-030f63f25740-console-oauth-config\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.667071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bd5\" (UniqueName: \"kubernetes.io/projected/9100a499-798c-4e58-815d-030f63f25740-kube-api-access-w4bd5\") pod \"console-8656c6c5d8-kzwmx\" (UID: \"9100a499-798c-4e58-815d-030f63f25740\") " pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.672931 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b958b34e-1fbb-4f66-bec7-130b5a0d2d9c-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-49sm6\" (UID: \"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.734164 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751410 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lw97\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751507 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751596 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751652 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.751870 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.753786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.754320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.756577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.757879 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.758550 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.759353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.762032 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.762069 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff2a51149070d10d9416b66fcd1d1cee37f55591d06c1c8c492c45e8a7bf5698/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.767326 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.773161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lw97\" 
(UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.773609 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.835271 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.854664 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 19 17:04:48 crc kubenswrapper[4792]: I0319 17:04:48.947066 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.179980 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hkjvd"] Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.181942 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.187907 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.188196 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.188354 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-r5qwk" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.215954 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hkjvd"] Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.228736 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-56rd9"] Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.230826 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.267663 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56rd9"] Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.302919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqqb\" (UniqueName: \"kubernetes.io/projected/bf820855-761d-475e-b080-1bf46ddddfd3-kube-api-access-gzqqb\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.302985 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf820855-761d-475e-b080-1bf46ddddfd3-scripts\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303049 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-log-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-etc-ovs\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-run\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-ovn-controller-tls-certs\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303326 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6jwh\" (UniqueName: \"kubernetes.io/projected/55ea5748-5aed-4ae4-a590-94a23170b160-kube-api-access-w6jwh\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303477 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-log\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.303857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-combined-ca-bundle\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.304007 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55ea5748-5aed-4ae4-a590-94a23170b160-scripts\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.304051 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-lib\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406441 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-log-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406487 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run\") pod \"ovn-controller-hkjvd\" 
(UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406510 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-etc-ovs\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406527 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-run\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-ovn-controller-tls-certs\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406614 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6jwh\" (UniqueName: \"kubernetes.io/projected/55ea5748-5aed-4ae4-a590-94a23170b160-kube-api-access-w6jwh\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc 
kubenswrapper[4792]: I0319 17:04:50.406642 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-log\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406687 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-combined-ca-bundle\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406730 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55ea5748-5aed-4ae4-a590-94a23170b160-scripts\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-lib\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqqb\" (UniqueName: \"kubernetes.io/projected/bf820855-761d-475e-b080-1bf46ddddfd3-kube-api-access-gzqqb\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.406822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/bf820855-761d-475e-b080-1bf46ddddfd3-scripts\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-log-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407234 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-etc-ovs\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-log\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-run\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " 
pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/55ea5748-5aed-4ae4-a590-94a23170b160-var-run-ovn\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.407517 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bf820855-761d-475e-b080-1bf46ddddfd3-var-lib\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.409972 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55ea5748-5aed-4ae4-a590-94a23170b160-scripts\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.410219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf820855-761d-475e-b080-1bf46ddddfd3-scripts\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.417741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-ovn-controller-tls-certs\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.421499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ea5748-5aed-4ae4-a590-94a23170b160-combined-ca-bundle\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.424424 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqqb\" (UniqueName: \"kubernetes.io/projected/bf820855-761d-475e-b080-1bf46ddddfd3-kube-api-access-gzqqb\") pod \"ovn-controller-ovs-56rd9\" (UID: \"bf820855-761d-475e-b080-1bf46ddddfd3\") " pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.428420 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6jwh\" (UniqueName: \"kubernetes.io/projected/55ea5748-5aed-4ae4-a590-94a23170b160-kube-api-access-w6jwh\") pod \"ovn-controller-hkjvd\" (UID: \"55ea5748-5aed-4ae4-a590-94a23170b160\") " pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.521152 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hkjvd" Mar 19 17:04:50 crc kubenswrapper[4792]: I0319 17:04:50.569367 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.116265 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.119069 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.123399 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.123486 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.123544 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.123586 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wftkm" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.123621 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.143757 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.244942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwsd\" (UniqueName: \"kubernetes.io/projected/ce9f56e3-2b21-4854-ada6-3c81b790ccab-kube-api-access-2qwsd\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245422 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245495 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245636 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.245673 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347424 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347543 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qwsd\" (UniqueName: \"kubernetes.io/projected/ce9f56e3-2b21-4854-ada6-3c81b790ccab-kube-api-access-2qwsd\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.347644 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.348232 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.348662 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc 
kubenswrapper[4792]: I0319 17:04:52.349089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f56e3-2b21-4854-ada6-3c81b790ccab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.350204 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.350247 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f97249e0daba12a3aeb32b2f0af1a15f71635ad292d7231ab690fb58a06b882/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.355341 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.355482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.355556 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ce9f56e3-2b21-4854-ada6-3c81b790ccab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.365479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qwsd\" (UniqueName: \"kubernetes.io/projected/ce9f56e3-2b21-4854-ada6-3c81b790ccab-kube-api-access-2qwsd\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.394689 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a37ba024-4fae-477a-b7fd-78d8dd34e6e3\") pod \"ovsdbserver-nb-0\" (UID: \"ce9f56e3-2b21-4854-ada6-3c81b790ccab\") " pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:52 crc kubenswrapper[4792]: I0319 17:04:52.454089 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.077724 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.079649 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.082026 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.082072 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bwrvs" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.082424 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.084942 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.090430 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.189558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.189725 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p98w\" (UniqueName: \"kubernetes.io/projected/2f154134-be00-48ab-a2b9-28cce44cc28a-kube-api-access-6p98w\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.189784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.189861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-config\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.190058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.190276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.190549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.190678 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295149 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295193 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc 
kubenswrapper[4792]: I0319 17:04:54.295376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p98w\" (UniqueName: \"kubernetes.io/projected/2f154134-be00-48ab-a2b9-28cce44cc28a-kube-api-access-6p98w\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-config\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.295542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.298224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.300289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.300314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.301024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f154134-be00-48ab-a2b9-28cce44cc28a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.306052 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.306082 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a4464a143c3091c0c4219cf1287fb8b8a5e82b20f2ae2e336b8024b473f4a6f6/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.306773 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f154134-be00-48ab-a2b9-28cce44cc28a-config\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.310963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p98w\" (UniqueName: \"kubernetes.io/projected/2f154134-be00-48ab-a2b9-28cce44cc28a-kube-api-access-6p98w\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.339663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3faf46ad-d355-4bf1-a8d2-70fea53d4bcd\") pod \"ovsdbserver-sb-0\" (UID: \"2f154134-be00-48ab-a2b9-28cce44cc28a\") " pod="openstack/ovsdbserver-sb-0" Mar 19 17:04:54 crc kubenswrapper[4792]: I0319 17:04:54.405195 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 17:05:00 crc kubenswrapper[4792]: I0319 17:05:00.076130 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.684791 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.685015 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zkdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(3daeb97c-0c99-4d2c-8d07-5b168bf010d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:00 crc 
kubenswrapper[4792]: E0319 17:05:00.686269 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.695111 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.695376 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fq8kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(ae950307-1857-4a46-ab98-55843387f128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:00 crc 
kubenswrapper[4792]: E0319 17:05:00.696654 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="ae950307-1857-4a46-ab98-55843387f128" Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.706570 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 19 17:05:00 crc kubenswrapper[4792]: E0319 17:05:00.706717 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxvwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(886bf823-6964-4a71-807d-2b448201fc5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:00 crc 
kubenswrapper[4792]: E0319 17:05:00.707955 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" Mar 19 17:05:01 crc kubenswrapper[4792]: E0319 17:05:01.042261 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" Mar 19 17:05:01 crc kubenswrapper[4792]: E0319 17:05:01.042543 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" Mar 19 17:05:01 crc kubenswrapper[4792]: E0319 17:05:01.042568 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="ae950307-1857-4a46-ab98-55843387f128" Mar 19 17:05:07 crc kubenswrapper[4792]: E0319 17:05:07.411088 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Mar 19 17:05:07 crc kubenswrapper[4792]: E0319 17:05:07.411740 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch6pj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:07 crc kubenswrapper[4792]: E0319 17:05:07.412959 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" Mar 19 17:05:07 crc kubenswrapper[4792]: E0319 17:05:07.420367 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 19 17:05:07 crc kubenswrapper[4792]: E0319 17:05:07.420622 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dst7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(8d58d025-e325-4ac1-8bf8-b251ea8ed3f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:07 crc 
kubenswrapper[4792]: E0319 17:05:07.422110 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" Mar 19 17:05:08 crc kubenswrapper[4792]: I0319 17:05:08.100447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerStarted","Data":"614a561975be2625c0f09c29efe1f585c04e5b2f9dc502dc56c5e9be9a9c1267"} Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.102313 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.102507 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.194458 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.194891 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xc2bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dvvsx_openstack(9f97023d-3238-45c3-9409-ecb03a63f844): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 
17:05:08.196022 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" podUID="9f97023d-3238-45c3-9409-ecb03a63f844" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.203089 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.203334 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fffk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-56ppc_openstack(fcacd38a-1e90-49d1-8327-cc0c143b1e24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.206668 4792 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" podUID="fcacd38a-1e90-49d1-8327-cc0c143b1e24" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.326616 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.326754 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhvdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-qdtgt_openstack(1a03e279-0d5d-4baf-99e2-dd1c3ba441a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.332184 4792 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.335483 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.335638 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-9sjzh_openstack(2aaf8c82-efe8-424c-8e61-e0c418980262): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:05:08 crc kubenswrapper[4792]: E0319 17:05:08.336735 4792 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" Mar 19 17:05:08 crc kubenswrapper[4792]: I0319 17:05:08.856178 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 19 17:05:08 crc kubenswrapper[4792]: W0319 17:05:08.861679 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f277fa1_4306_4605_b619_ab8b8df16ae5.slice/crio-ed24c12d4afa611c56da2715b250b02deaace8a9a7fee62c12f373742270104a WatchSource:0}: Error finding container ed24c12d4afa611c56da2715b250b02deaace8a9a7fee62c12f373742270104a: Status 404 returned error can't find the container with id ed24c12d4afa611c56da2715b250b02deaace8a9a7fee62c12f373742270104a Mar 19 17:05:08 crc kubenswrapper[4792]: I0319 17:05:08.945787 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 19 17:05:08 crc kubenswrapper[4792]: W0319 17:05:08.946966 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f4ce965_a3ed_4d9f_918f_95ff40840ca5.slice/crio-135eef3b724949524d84de89ee2c44a5c1e778d1caa401e5148e8d966ac96e0d WatchSource:0}: Error finding container 135eef3b724949524d84de89ee2c44a5c1e778d1caa401e5148e8d966ac96e0d: Status 404 returned error can't find the container with id 135eef3b724949524d84de89ee2c44a5c1e778d1caa401e5148e8d966ac96e0d Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.108489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7f277fa1-4306-4605-b619-ab8b8df16ae5","Type":"ContainerStarted","Data":"ed24c12d4afa611c56da2715b250b02deaace8a9a7fee62c12f373742270104a"} Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 
17:05:09.109453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerStarted","Data":"135eef3b724949524d84de89ee2c44a5c1e778d1caa401e5148e8d966ac96e0d"} Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.110965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerStarted","Data":"b1b9496da2dc310e632db16a6bd40569150c7375418cd2f39e7a1bce67dbebef"} Mar 19 17:05:09 crc kubenswrapper[4792]: E0319 17:05:09.115044 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" Mar 19 17:05:09 crc kubenswrapper[4792]: E0319 17:05:09.115330 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.582083 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.678961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config\") pod \"9f97023d-3238-45c3-9409-ecb03a63f844\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.679208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2bm\" (UniqueName: \"kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm\") pod \"9f97023d-3238-45c3-9409-ecb03a63f844\" (UID: \"9f97023d-3238-45c3-9409-ecb03a63f844\") " Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.680187 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config" (OuterVolumeSpecName: "config") pod "9f97023d-3238-45c3-9409-ecb03a63f844" (UID: "9f97023d-3238-45c3-9409-ecb03a63f844"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.681617 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f97023d-3238-45c3-9409-ecb03a63f844-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.687801 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm" (OuterVolumeSpecName: "kube-api-access-xc2bm") pod "9f97023d-3238-45c3-9409-ecb03a63f844" (UID: "9f97023d-3238-45c3-9409-ecb03a63f844"). InnerVolumeSpecName "kube-api-access-xc2bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.724048 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6"] Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.761937 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hkjvd"] Mar 19 17:05:09 crc kubenswrapper[4792]: W0319 17:05:09.762102 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9100a499_798c_4e58_815d_030f63f25740.slice/crio-e77a855370fceebd494720b48708df650092651cfd10c7ba894f7f3768a5ae3c WatchSource:0}: Error finding container e77a855370fceebd494720b48708df650092651cfd10c7ba894f7f3768a5ae3c: Status 404 returned error can't find the container with id e77a855370fceebd494720b48708df650092651cfd10c7ba894f7f3768a5ae3c Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.773455 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8656c6c5d8-kzwmx"] Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.782695 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.786220 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.787187 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2bm\" (UniqueName: \"kubernetes.io/projected/9f97023d-3238-45c3-9409-ecb03a63f844-kube-api-access-xc2bm\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:09 crc kubenswrapper[4792]: W0319 17:05:09.809239 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda72bb0db_ba96_464e_84be_283010baf52c.slice/crio-baea52129fc06890008a67ba3f1e7c33c5a4f9dd8cc2c33de79bfd5dca032299 WatchSource:0}: Error finding container baea52129fc06890008a67ba3f1e7c33c5a4f9dd8cc2c33de79bfd5dca032299: Status 404 returned error can't find the container with id baea52129fc06890008a67ba3f1e7c33c5a4f9dd8cc2c33de79bfd5dca032299 Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.888766 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fffk\" (UniqueName: \"kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk\") pod \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.889174 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc\") pod \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.889571 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config\") pod \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\" (UID: \"fcacd38a-1e90-49d1-8327-cc0c143b1e24\") " Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 
17:05:09.889746 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcacd38a-1e90-49d1-8327-cc0c143b1e24" (UID: "fcacd38a-1e90-49d1-8327-cc0c143b1e24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.890682 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config" (OuterVolumeSpecName: "config") pod "fcacd38a-1e90-49d1-8327-cc0c143b1e24" (UID: "fcacd38a-1e90-49d1-8327-cc0c143b1e24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.891366 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.891383 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcacd38a-1e90-49d1-8327-cc0c143b1e24-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.895967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk" (OuterVolumeSpecName: "kube-api-access-5fffk") pod "fcacd38a-1e90-49d1-8327-cc0c143b1e24" (UID: "fcacd38a-1e90-49d1-8327-cc0c143b1e24"). InnerVolumeSpecName "kube-api-access-5fffk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:09 crc kubenswrapper[4792]: I0319 17:05:09.993358 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fffk\" (UniqueName: \"kubernetes.io/projected/fcacd38a-1e90-49d1-8327-cc0c143b1e24-kube-api-access-5fffk\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.145431 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" event={"ID":"9f97023d-3238-45c3-9409-ecb03a63f844","Type":"ContainerDied","Data":"38daeb2f7a97a12120f0102b222b8229d883cc856daa48a3ef85ec292d47e0bd"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.145555 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvvsx" Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.158854 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a72bb0db-ba96-464e-84be-283010baf52c","Type":"ContainerStarted","Data":"baea52129fc06890008a67ba3f1e7c33c5a4f9dd8cc2c33de79bfd5dca032299"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.170342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8656c6c5d8-kzwmx" event={"ID":"9100a499-798c-4e58-815d-030f63f25740","Type":"ContainerStarted","Data":"b8b804b03f8e523b9964d153add61e4bd3e6aebf7e68a8371911ed626a0508ed"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.170386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8656c6c5d8-kzwmx" event={"ID":"9100a499-798c-4e58-815d-030f63f25740","Type":"ContainerStarted","Data":"e77a855370fceebd494720b48708df650092651cfd10c7ba894f7f3768a5ae3c"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.181089 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" 
event={"ID":"fcacd38a-1e90-49d1-8327-cc0c143b1e24","Type":"ContainerDied","Data":"22f21c60b6ac0730b60df583b22663548c5969445169df53f1b9fb0653ef255b"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.181190 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-56ppc" Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.186694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd" event={"ID":"55ea5748-5aed-4ae4-a590-94a23170b160","Type":"ContainerStarted","Data":"959c31f5ea8fcdea64f1dd4bfd0d385f0bb6f5c53a7ff39a29c33038fd02c9d8"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.188487 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" event={"ID":"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c","Type":"ContainerStarted","Data":"2dcc1b53120fadb9954d1646fb6a553306d0ce1053ae5a1073ff67122bfe260b"} Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.202756 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.294211 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8656c6c5d8-kzwmx" podStartSLOduration=22.294188898 podStartE2EDuration="22.294188898s" podCreationTimestamp="2026-03-19 17:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:10.237957207 +0000 UTC m=+1473.384014747" watchObservedRunningTime="2026-03-19 17:05:10.294188898 +0000 UTC m=+1473.440246448" Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.297422 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvvsx"] Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.313919 4792 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 17:05:10 crc kubenswrapper[4792]: W0319 17:05:10.324529 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f154134_be00_48ab_a2b9_28cce44cc28a.slice/crio-e3ff8574b0e743f85ebb7baf173192be6fd2c0c59c6fc20b530e9c50a13894ef WatchSource:0}: Error finding container e3ff8574b0e743f85ebb7baf173192be6fd2c0c59c6fc20b530e9c50a13894ef: Status 404 returned error can't find the container with id e3ff8574b0e743f85ebb7baf173192be6fd2c0c59c6fc20b530e9c50a13894ef Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.335566 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.348076 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-56ppc"] Mar 19 17:05:10 crc kubenswrapper[4792]: I0319 17:05:10.870717 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56rd9"] Mar 19 17:05:11 crc kubenswrapper[4792]: I0319 17:05:11.200352 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2f154134-be00-48ab-a2b9-28cce44cc28a","Type":"ContainerStarted","Data":"e3ff8574b0e743f85ebb7baf173192be6fd2c0c59c6fc20b530e9c50a13894ef"} Mar 19 17:05:11 crc kubenswrapper[4792]: I0319 17:05:11.256022 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 17:05:11 crc kubenswrapper[4792]: W0319 17:05:11.491705 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9f56e3_2b21_4854_ada6_3c81b790ccab.slice/crio-ad87f4c7555586a4a00805dc8dc4ef5b49f3458f8e03b44427c43d1550abbf89 WatchSource:0}: Error finding container ad87f4c7555586a4a00805dc8dc4ef5b49f3458f8e03b44427c43d1550abbf89: Status 404 returned error can't find the container 
with id ad87f4c7555586a4a00805dc8dc4ef5b49f3458f8e03b44427c43d1550abbf89 Mar 19 17:05:11 crc kubenswrapper[4792]: W0319 17:05:11.494181 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf820855_761d_475e_b080_1bf46ddddfd3.slice/crio-8413e36d9eb48ca31fa6906f38423659ea816778e6d2c11df60580cd8296d515 WatchSource:0}: Error finding container 8413e36d9eb48ca31fa6906f38423659ea816778e6d2c11df60580cd8296d515: Status 404 returned error can't find the container with id 8413e36d9eb48ca31fa6906f38423659ea816778e6d2c11df60580cd8296d515 Mar 19 17:05:11 crc kubenswrapper[4792]: I0319 17:05:11.754717 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f97023d-3238-45c3-9409-ecb03a63f844" path="/var/lib/kubelet/pods/9f97023d-3238-45c3-9409-ecb03a63f844/volumes" Mar 19 17:05:11 crc kubenswrapper[4792]: I0319 17:05:11.755171 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcacd38a-1e90-49d1-8327-cc0c143b1e24" path="/var/lib/kubelet/pods/fcacd38a-1e90-49d1-8327-cc0c143b1e24/volumes" Mar 19 17:05:12 crc kubenswrapper[4792]: I0319 17:05:12.210287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce9f56e3-2b21-4854-ada6-3c81b790ccab","Type":"ContainerStarted","Data":"ad87f4c7555586a4a00805dc8dc4ef5b49f3458f8e03b44427c43d1550abbf89"} Mar 19 17:05:12 crc kubenswrapper[4792]: I0319 17:05:12.212390 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56rd9" event={"ID":"bf820855-761d-475e-b080-1bf46ddddfd3","Type":"ContainerStarted","Data":"8413e36d9eb48ca31fa6906f38423659ea816778e6d2c11df60580cd8296d515"} Mar 19 17:05:13 crc kubenswrapper[4792]: I0319 17:05:13.223972 4792 generic.go:334] "Generic (PLEG): container finished" podID="74993dec-a63b-4856-913e-39ec56f88058" containerID="b1b9496da2dc310e632db16a6bd40569150c7375418cd2f39e7a1bce67dbebef" exitCode=0 Mar 19 17:05:13 
crc kubenswrapper[4792]: I0319 17:05:13.224046 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerDied","Data":"b1b9496da2dc310e632db16a6bd40569150c7375418cd2f39e7a1bce67dbebef"} Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.274520 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerStarted","Data":"f256c98fb2d8568bfe54d6a492050c3f0b90acc15b08924c19543f88117420f9"} Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.293993 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7f277fa1-4306-4605-b619-ab8b8df16ae5","Type":"ContainerStarted","Data":"3501fa5caeefddc8bbcca7c8d232efa9be870dfbe9cf0fed552620cd28418824"} Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.294139 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.317616 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.284948951 podStartE2EDuration="31.317596677s" podCreationTimestamp="2026-03-19 17:04:43 +0000 UTC" firstStartedPulling="2026-03-19 17:05:07.41422115 +0000 UTC m=+1470.560278690" lastFinishedPulling="2026-03-19 17:05:08.446868876 +0000 UTC m=+1471.592926416" observedRunningTime="2026-03-19 17:05:14.307379098 +0000 UTC m=+1477.453436638" watchObservedRunningTime="2026-03-19 17:05:14.317596677 +0000 UTC m=+1477.463654217" Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.331675 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.732209544 podStartE2EDuration="30.331653032s" podCreationTimestamp="2026-03-19 17:04:44 +0000 UTC" 
firstStartedPulling="2026-03-19 17:05:08.863374036 +0000 UTC m=+1472.009431576" lastFinishedPulling="2026-03-19 17:05:13.462817514 +0000 UTC m=+1476.608875064" observedRunningTime="2026-03-19 17:05:14.329562775 +0000 UTC m=+1477.475620315" watchObservedRunningTime="2026-03-19 17:05:14.331653032 +0000 UTC m=+1477.477710572" Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.438946 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 19 17:05:14 crc kubenswrapper[4792]: I0319 17:05:14.438997 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.320044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce9f56e3-2b21-4854-ada6-3c81b790ccab","Type":"ContainerStarted","Data":"a1ec786d4d95cb99c6b9baaac9a8de818271790a648e1381117e766fdc555fb2"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.321792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd" event={"ID":"55ea5748-5aed-4ae4-a590-94a23170b160","Type":"ContainerStarted","Data":"da785aede72315132d41df4f0e0e2922eed7cffa47998c6e8bc173a4419bc2a4"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.322968 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hkjvd" Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.324406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" event={"ID":"b958b34e-1fbb-4f66-bec7-130b5a0d2d9c","Type":"ContainerStarted","Data":"43873c8c63fc5385eff4fa87f28b6b426631cad1b0cb3be64d09b4224565ae1c"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.326164 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56rd9" 
event={"ID":"bf820855-761d-475e-b080-1bf46ddddfd3","Type":"ContainerStarted","Data":"bafe8f91e8fbab2e15f4d2450ac80737507629db59f4ea3f5674b3b3269298e0"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.328022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a72bb0db-ba96-464e-84be-283010baf52c","Type":"ContainerStarted","Data":"aa5591dd145814d99d4e5532e8e11b5af69b7fd3bb1c2e38e4c9a0039ab20377"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.328077 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.329076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2f154134-be00-48ab-a2b9-28cce44cc28a","Type":"ContainerStarted","Data":"b37fc628c25062ad2a58bc2b7fdd5e93ab4b460792f5811906a30ab63f8a90a4"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.330120 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerStarted","Data":"231d6c9cabaeddc0645573fec9a25a446b0ee43c32924261524c79803290fc0d"} Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.339597 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hkjvd" podStartSLOduration=22.409227803 podStartE2EDuration="27.339582555s" podCreationTimestamp="2026-03-19 17:04:50 +0000 UTC" firstStartedPulling="2026-03-19 17:05:09.754376022 +0000 UTC m=+1472.900433562" lastFinishedPulling="2026-03-19 17:05:14.684730774 +0000 UTC m=+1477.830788314" observedRunningTime="2026-03-19 17:05:17.336882361 +0000 UTC m=+1480.482939901" watchObservedRunningTime="2026-03-19 17:05:17.339582555 +0000 UTC m=+1480.485640095" Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.404947 4792 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.204951424 podStartE2EDuration="31.404930475s" podCreationTimestamp="2026-03-19 17:04:46 +0000 UTC" firstStartedPulling="2026-03-19 17:05:09.82548213 +0000 UTC m=+1472.971539660" lastFinishedPulling="2026-03-19 17:05:17.025461171 +0000 UTC m=+1480.171518711" observedRunningTime="2026-03-19 17:05:17.396186776 +0000 UTC m=+1480.542244316" watchObservedRunningTime="2026-03-19 17:05:17.404930475 +0000 UTC m=+1480.550988015" Mar 19 17:05:17 crc kubenswrapper[4792]: I0319 17:05:17.424492 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-49sm6" podStartSLOduration=24.545707719 podStartE2EDuration="30.42445727s" podCreationTimestamp="2026-03-19 17:04:47 +0000 UTC" firstStartedPulling="2026-03-19 17:05:09.744544742 +0000 UTC m=+1472.890602282" lastFinishedPulling="2026-03-19 17:05:15.623294293 +0000 UTC m=+1478.769351833" observedRunningTime="2026-03-19 17:05:17.417712935 +0000 UTC m=+1480.563770475" watchObservedRunningTime="2026-03-19 17:05:17.42445727 +0000 UTC m=+1480.570514810" Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.387269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerStarted","Data":"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec"} Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.402352 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf820855-761d-475e-b080-1bf46ddddfd3" containerID="bafe8f91e8fbab2e15f4d2450ac80737507629db59f4ea3f5674b3b3269298e0" exitCode=0 Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.404230 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56rd9" 
event={"ID":"bf820855-761d-475e-b080-1bf46ddddfd3","Type":"ContainerDied","Data":"bafe8f91e8fbab2e15f4d2450ac80737507629db59f4ea3f5674b3b3269298e0"} Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.735979 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.736300 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:05:18 crc kubenswrapper[4792]: I0319 17:05:18.745648 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:05:19 crc kubenswrapper[4792]: I0319 17:05:19.414826 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerStarted","Data":"bc1b64f0e6128b699c99dc8dcb63e408b32a3cd1bb88f233cb5b2f619cde4569"} Mar 19 17:05:19 crc kubenswrapper[4792]: I0319 17:05:19.419335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56rd9" event={"ID":"bf820855-761d-475e-b080-1bf46ddddfd3","Type":"ContainerStarted","Data":"bfef79f0f4a831d5f561097b1f75c745c4cd879445dcc9a32b54a689111753d5"} Mar 19 17:05:19 crc kubenswrapper[4792]: I0319 17:05:19.422528 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerStarted","Data":"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e"} Mar 19 17:05:19 crc kubenswrapper[4792]: I0319 17:05:19.429557 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8656c6c5d8-kzwmx" Mar 19 17:05:19 crc kubenswrapper[4792]: I0319 17:05:19.494312 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"] Mar 19 17:05:19 
crc kubenswrapper[4792]: I0319 17:05:19.698006 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 19 17:05:20 crc kubenswrapper[4792]: I0319 17:05:20.433146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56rd9" event={"ID":"bf820855-761d-475e-b080-1bf46ddddfd3","Type":"ContainerStarted","Data":"3e1be7714d8715dd917d92c5d6bfc128d3d8d40bcee76d3429098ea676fd8751"} Mar 19 17:05:20 crc kubenswrapper[4792]: I0319 17:05:20.433719 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:05:20 crc kubenswrapper[4792]: I0319 17:05:20.460171 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-56rd9" podStartSLOduration=26.33287358 podStartE2EDuration="30.460150344s" podCreationTimestamp="2026-03-19 17:04:50 +0000 UTC" firstStartedPulling="2026-03-19 17:05:11.495919366 +0000 UTC m=+1474.641976896" lastFinishedPulling="2026-03-19 17:05:15.62319612 +0000 UTC m=+1478.769253660" observedRunningTime="2026-03-19 17:05:20.452758522 +0000 UTC m=+1483.598816062" watchObservedRunningTime="2026-03-19 17:05:20.460150344 +0000 UTC m=+1483.606207884" Mar 19 17:05:20 crc kubenswrapper[4792]: I0319 17:05:20.569712 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:05:22 crc kubenswrapper[4792]: I0319 17:05:22.450774 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerID="231d6c9cabaeddc0645573fec9a25a446b0ee43c32924261524c79803290fc0d" exitCode=0 Mar 19 17:05:22 crc kubenswrapper[4792]: I0319 17:05:22.450869 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerDied","Data":"231d6c9cabaeddc0645573fec9a25a446b0ee43c32924261524c79803290fc0d"} Mar 
19 17:05:23 crc kubenswrapper[4792]: I0319 17:05:23.464309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerStarted","Data":"eb4a7be4f50be7354e01d506a44dffdf85cda1ccc2a413dfca36b1e0196c8715"} Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.182831 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.470235 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"] Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.518992 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"] Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.520735 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.528498 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"] Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.675277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.675336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljcj\" (UniqueName: \"kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc 
kubenswrapper[4792]: I0319 17:05:27.675402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.780932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.792863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pljcj\" (UniqueName: \"kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.793885 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.795169 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.795602 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.819431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pljcj\" (UniqueName: \"kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj\") pod \"dnsmasq-dns-7cb5889db5-qsdc7\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:27 crc kubenswrapper[4792]: I0319 17:05:27.849047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.909370 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.916631 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.924734 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.924804 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fn2z4" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.924893 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.925996 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 19 17:05:28 crc kubenswrapper[4792]: I0319 17:05:28.933687 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.022865 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.022904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-lock\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.022963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxp4\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-kube-api-access-8sxp4\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " 
pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.023019 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-cache\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.023060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.023118 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797388ae-9d68-43cc-9e1b-063da11e1a5a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125137 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxp4\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-kube-api-access-8sxp4\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125229 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-cache\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125286 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797388ae-9d68-43cc-9e1b-063da11e1a5a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.125419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-lock\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.125670 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.125695 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.125732 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift 
podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:05:29.625719995 +0000 UTC m=+1492.771777535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.126000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-lock\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.126073 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/797388ae-9d68-43cc-9e1b-063da11e1a5a-cache\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.139702 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.139966 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c10f31030f2604b4567691da6f473a57a1f858a760c793151cdbb3be1d4abca2/globalmount\"" pod="openstack/swift-storage-0"
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.140324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797388ae-9d68-43cc-9e1b-063da11e1a5a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.144237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxp4\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-kube-api-access-8sxp4\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.184121 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6569e09-d3f7-42f2-b4e4-a5856fd545c1\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.634390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.634561 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.634582 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 19 17:05:29 crc kubenswrapper[4792]: E0319 17:05:29.634627 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:05:30.634611885 +0000 UTC m=+1493.780669425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found
Mar 19 17:05:29 crc kubenswrapper[4792]: I0319 17:05:29.975540 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.313635 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.314174 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n56hddh5b4h5c5h5d4h5fbh94h558h568h66h699h6chc9h56h57fh6dh57dh6fh575hddh5bdh5b8h569h5ddh55dh597h649h5b9h8fh577h9fh6fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6p98w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(2f154134-be00-48ab-a2b9-28cce44cc28a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.315346 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="2f154134-be00-48ab-a2b9-28cce44cc28a"
Mar 19 17:05:30 crc kubenswrapper[4792]: I0319 17:05:30.361465 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output=<
Mar 19 17:05:30 crc kubenswrapper[4792]: wsrep_local_state_comment (Joined) differs from Synced
Mar 19 17:05:30 crc kubenswrapper[4792]: >
Mar 19 17:05:30 crc kubenswrapper[4792]: I0319 17:05:30.482584 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"]
Mar 19 17:05:30 crc kubenswrapper[4792]: I0319 17:05:30.550739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerStarted","Data":"4f40aeb995baeef34d23e27eab06dd491d35e701838e17ef1531ced337351c6c"}
Mar 19 17:05:30 crc kubenswrapper[4792]: I0319 17:05:30.552262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" event={"ID":"ca036e7f-6937-42d4-8d72-6a4ebfb8b789","Type":"ContainerStarted","Data":"771e2506200fe4cea9e7ecf7daa114de59a5d3935dd77b8a5f90d14392c84378"}
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.554744 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="2f154134-be00-48ab-a2b9-28cce44cc28a"
Mar 19 17:05:30 crc kubenswrapper[4792]: I0319 17:05:30.672217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.672442 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.673007 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 19 17:05:30 crc kubenswrapper[4792]: E0319 17:05:30.673067 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:05:32.673030871 +0000 UTC m=+1495.819088411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found
Mar 19 17:05:31 crc kubenswrapper[4792]: E0319 17:05:31.329162 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Mar 19 17:05:31 crc kubenswrapper[4792]: E0319 17:05:31.329384 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n66bh56fh9ch577h5dbh67fh577h579h8fh6ch689hb6h5d5hf5h66bh54fhb8h554h545h5b5h7fh55h5h5dh5dch58bhcbh5bch9fh557h57fh557q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qwsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(ce9f56e3-2b21-4854-ada6-3c81b790ccab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 19 17:05:31 crc kubenswrapper[4792]: E0319 17:05:31.330538 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="ce9f56e3-2b21-4854-ada6-3c81b790ccab"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.454698 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.538080 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tppw6"]
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.540596 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.543667 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.543667 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.544596 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.547229 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tppw6"]
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.593068 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tppw6"]
Mar 19 17:05:32 crc kubenswrapper[4792]: E0319 17:05:32.593824 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-74w55 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-tppw6" podUID="0b5da4b2-61fc-4955-821f-ba90139db179"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.600442 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-44cgh"]
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.601813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.645398 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-44cgh"]
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655043 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655083 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655121 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655147 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74w55\" (UniqueName: \"kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655170 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.655637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.757193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.757248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.757313 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.757906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758083 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758209 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74w55\" (UniqueName: \"kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758376 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758442 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758524 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp949\" (UniqueName: \"kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.758643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.759067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: E0319 17:05:32.759288 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 19 17:05:32 crc kubenswrapper[4792]: E0319 17:05:32.759321 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 19 17:05:32 crc kubenswrapper[4792]: E0319 17:05:32.759384 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:05:36.759364374 +0000 UTC m=+1499.905422024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.764343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.764372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.772236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.776792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74w55\" (UniqueName: \"kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55\") pod \"swift-ring-rebalance-tppw6\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") " pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861687 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp949\" (UniqueName: \"kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861904 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.861938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.862601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.862772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.864936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.865249 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.865321 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.880439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp949\" (UniqueName: \"kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949\") pod \"swift-ring-rebalance-44cgh\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:32 crc kubenswrapper[4792]: I0319 17:05:32.918651 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-44cgh"
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.406382 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.457555 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.598666 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.616654 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tppw6"
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780231 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74w55\" (UniqueName: \"kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780479 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780540 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780584 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780643 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf\") pod \"0b5da4b2-61fc-4955-821f-ba90139db179\" (UID: \"0b5da4b2-61fc-4955-821f-ba90139db179\") "
Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.780947 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "etc-swift".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.781076 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.781398 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts" (OuterVolumeSpecName: "scripts") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.781632 4792 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.781651 4792 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0b5da4b2-61fc-4955-821f-ba90139db179-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.781662 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b5da4b2-61fc-4955-821f-ba90139db179-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.783654 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55" (OuterVolumeSpecName: "kube-api-access-74w55") pod 
"0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "kube-api-access-74w55". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.783828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.784650 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.786179 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b5da4b2-61fc-4955-821f-ba90139db179" (UID: "0b5da4b2-61fc-4955-821f-ba90139db179"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.883188 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.883587 4792 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.883598 4792 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0b5da4b2-61fc-4955-821f-ba90139db179-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:33 crc kubenswrapper[4792]: I0319 17:05:33.883607 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74w55\" (UniqueName: \"kubernetes.io/projected/0b5da4b2-61fc-4955-821f-ba90139db179-kube-api-access-74w55\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.405971 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.449734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.454721 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.502901 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.519494 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 19 17:05:34 
crc kubenswrapper[4792]: I0319 17:05:34.607804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tppw6" Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.672901 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tppw6"] Mar 19 17:05:34 crc kubenswrapper[4792]: I0319 17:05:34.681440 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-tppw6"] Mar 19 17:05:35 crc kubenswrapper[4792]: I0319 17:05:35.751850 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5da4b2-61fc-4955-821f-ba90139db179" path="/var/lib/kubelet/pods/0b5da4b2-61fc-4955-821f-ba90139db179/volumes" Mar 19 17:05:36 crc kubenswrapper[4792]: I0319 17:05:36.776152 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:36 crc kubenswrapper[4792]: E0319 17:05:36.776435 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 17:05:36 crc kubenswrapper[4792]: E0319 17:05:36.776475 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 17:05:36 crc kubenswrapper[4792]: E0319 17:05:36.776560 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:05:44.776532671 +0000 UTC m=+1507.922590231 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found Mar 19 17:05:37 crc kubenswrapper[4792]: I0319 17:05:37.507307 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 19 17:05:38 crc kubenswrapper[4792]: E0319 17:05:38.021324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ce9f56e3-2b21-4854-ada6-3c81b790ccab" Mar 19 17:05:38 crc kubenswrapper[4792]: E0319 17:05:38.021324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="2f154134-be00-48ab-a2b9-28cce44cc28a" Mar 19 17:05:38 crc kubenswrapper[4792]: E0319 17:05:38.645563 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ce9f56e3-2b21-4854-ada6-3c81b790ccab" Mar 19 17:05:38 crc kubenswrapper[4792]: E0319 17:05:38.647348 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" 
podUID="2f154134-be00-48ab-a2b9-28cce44cc28a" Mar 19 17:05:41 crc kubenswrapper[4792]: E0319 17:05:41.601354 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:cf7bcbc6648378b2d34252493fdfc4186d83f518a7e79106a86ea263e0630a0e" Mar 19 17:05:41 crc kubenswrapper[4792]: E0319 17:05:41.602043 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:cf7bcbc6648378b2d34252493fdfc4186d83f518a7e79106a86ea263e0630a0e,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheu
s-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lw97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(9f4ce965-a3ed-4d9f-918f-95ff40840ca5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.116356 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-44cgh"] Mar 19 17:05:42 crc kubenswrapper[4792]: W0319 17:05:42.253795 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fc890d_ef4d_49ec_8f22_5200a9ec6167.slice/crio-381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7 WatchSource:0}: Error finding container 381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7: Status 404 returned error can't find the container with id 381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7 Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.686569 4792 generic.go:334] "Generic (PLEG): container finished" podID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" containerID="eef8e9e656d05d5b86f662857b27079cfc80f74c545fb7703e950042854ef60f" exitCode=0 Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.686960 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" event={"ID":"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9","Type":"ContainerDied","Data":"eef8e9e656d05d5b86f662857b27079cfc80f74c545fb7703e950042854ef60f"} Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.688851 4792 generic.go:334] "Generic (PLEG): container finished" podID="2aaf8c82-efe8-424c-8e61-e0c418980262" 
containerID="2735d335aba388d12b3bc62bb62381d3bb14a00bf39f991f3b3353e4821f14cd" exitCode=0 Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.688876 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" event={"ID":"2aaf8c82-efe8-424c-8e61-e0c418980262","Type":"ContainerDied","Data":"2735d335aba388d12b3bc62bb62381d3bb14a00bf39f991f3b3353e4821f14cd"} Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.690405 4792 generic.go:334] "Generic (PLEG): container finished" podID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerID="4f40aeb995baeef34d23e27eab06dd491d35e701838e17ef1531ced337351c6c" exitCode=0 Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.690453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerDied","Data":"4f40aeb995baeef34d23e27eab06dd491d35e701838e17ef1531ced337351c6c"} Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.703100 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerID="fc3db147dc4b7038005a2c97125e27861605de5a085332b46a0c8fa71d463842" exitCode=0 Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.703167 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" event={"ID":"ca036e7f-6937-42d4-8d72-6a4ebfb8b789","Type":"ContainerDied","Data":"fc3db147dc4b7038005a2c97125e27861605de5a085332b46a0c8fa71d463842"} Mar 19 17:05:42 crc kubenswrapper[4792]: I0319 17:05:42.708435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44cgh" event={"ID":"46fc890d-ef4d-49ec-8f22-5200a9ec6167","Type":"ContainerStarted","Data":"381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7"} Mar 19 17:05:43 crc kubenswrapper[4792]: E0319 17:05:43.052396 4792 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 19 17:05:43 
crc kubenswrapper[4792]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/2aaf8c82-efe8-424c-8e61-e0c418980262/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 17:05:43 crc kubenswrapper[4792]: > podSandboxID="893fff7973d7f64792bea973d43f2aba1a36c30aac4a52b7102eaf2e8e01a1d3" Mar 19 17:05:43 crc kubenswrapper[4792]: E0319 17:05:43.052866 4792 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 19 17:05:43 crc kubenswrapper[4792]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGe
t:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-9sjzh_openstack(2aaf8c82-efe8-424c-8e61-e0c418980262): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/2aaf8c82-efe8-424c-8e61-e0c418980262/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 17:05:43 crc kubenswrapper[4792]: > logger="UnhandledError" Mar 19 17:05:43 crc kubenswrapper[4792]: E0319 17:05:43.059749 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/2aaf8c82-efe8-424c-8e61-e0c418980262/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" Mar 19 17:05:43 crc 
kubenswrapper[4792]: I0319 17:05:43.107766 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.159467 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tk2zq"] Mar 19 17:05:43 crc kubenswrapper[4792]: E0319 17:05:43.159974 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" containerName="init" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.159994 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" containerName="init" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.160177 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" containerName="init" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.160931 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.163261 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.197623 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tk2zq"] Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.236733 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhvdl\" (UniqueName: \"kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl\") pod \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.237298 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc\") pod \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.238575 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config\") pod \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\" (UID: \"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9\") " Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.241219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9c2d\" (UniqueName: \"kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.241508 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.244459 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl" (OuterVolumeSpecName: "kube-api-access-zhvdl") pod "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" (UID: "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9"). InnerVolumeSpecName "kube-api-access-zhvdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.262378 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" (UID: "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.263734 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config" (OuterVolumeSpecName: "config") pod "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" (UID: "1a03e279-0d5d-4baf-99e2-dd1c3ba441a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.354682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9c2d\" (UniqueName: \"kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.354806 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.354966 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.354979 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhvdl\" (UniqueName: \"kubernetes.io/projected/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-kube-api-access-zhvdl\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.354988 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.358580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " 
pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.413577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9c2d\" (UniqueName: \"kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d\") pod \"root-account-create-update-tk2zq\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.507365 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.718875 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerStarted","Data":"55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6"} Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.722446 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" event={"ID":"ca036e7f-6937-42d4-8d72-6a4ebfb8b789","Type":"ContainerStarted","Data":"244f908981f749670b7f8ab9ada809e485f41b42786ecbcc1ee7c39f84d3ed71"} Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.722963 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.725030 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" event={"ID":"1a03e279-0d5d-4baf-99e2-dd1c3ba441a9","Type":"ContainerDied","Data":"66069c7bd3a83c7db4b5bef485e40e2c00425d823817b75cefad550e20ac3d6b"} Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.725070 4792 scope.go:117] "RemoveContainer" containerID="eef8e9e656d05d5b86f662857b27079cfc80f74c545fb7703e950042854ef60f" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.725179 4792 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qdtgt" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.750160 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371974.104637 podStartE2EDuration="1m2.750139228s" podCreationTimestamp="2026-03-19 17:04:41 +0000 UTC" firstStartedPulling="2026-03-19 17:04:44.036320354 +0000 UTC m=+1447.182377894" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:43.745332856 +0000 UTC m=+1506.891390396" watchObservedRunningTime="2026-03-19 17:05:43.750139228 +0000 UTC m=+1506.896196768" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.778051 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" podStartSLOduration=5.618331122 podStartE2EDuration="16.778026374s" podCreationTimestamp="2026-03-19 17:05:27 +0000 UTC" firstStartedPulling="2026-03-19 17:05:30.516480193 +0000 UTC m=+1493.662537733" lastFinishedPulling="2026-03-19 17:05:41.676175445 +0000 UTC m=+1504.822232985" observedRunningTime="2026-03-19 17:05:43.768224394 +0000 UTC m=+1506.914281934" watchObservedRunningTime="2026-03-19 17:05:43.778026374 +0000 UTC m=+1506.924083914" Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.838969 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"] Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.856994 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qdtgt"] Mar 19 17:05:43 crc kubenswrapper[4792]: I0319 17:05:43.984634 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tk2zq"] Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.561502 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-5dc4d84b9d-c6zml" podUID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" containerName="console" containerID="cri-o://f11d1ffdadbf8cd04905518f77efc8605e1ff57362e3bd7de1516e3beea87adf" gracePeriod=15 Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.622963 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.734302 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dc4d84b9d-c6zml_36d88883-fb51-4f6b-9d19-f7e312f3a9af/console/0.log" Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.734348 4792 generic.go:334] "Generic (PLEG): container finished" podID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" containerID="f11d1ffdadbf8cd04905518f77efc8605e1ff57362e3bd7de1516e3beea87adf" exitCode=2 Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.734414 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dc4d84b9d-c6zml" event={"ID":"36d88883-fb51-4f6b-9d19-f7e312f3a9af","Type":"ContainerDied","Data":"f11d1ffdadbf8cd04905518f77efc8605e1ff57362e3bd7de1516e3beea87adf"} Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.737119 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerStarted","Data":"cd0abb866fd23b8c463291a13648f842ac5d9e6a62f969963ee38ddf353f3bc1"} Mar 19 17:05:44 crc kubenswrapper[4792]: I0319 17:05:44.793570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:05:44 crc kubenswrapper[4792]: E0319 17:05:44.793774 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Mar 19 17:05:44 crc kubenswrapper[4792]: E0319 17:05:44.793795 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 17:05:44 crc kubenswrapper[4792]: E0319 17:05:44.793856 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift podName:797388ae-9d68-43cc-9e1b-063da11e1a5a nodeName:}" failed. No retries permitted until 2026-03-19 17:06:00.793826748 +0000 UTC m=+1523.939884288 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift") pod "swift-storage-0" (UID: "797388ae-9d68-43cc-9e1b-063da11e1a5a") : configmap "swift-ring-files" not found Mar 19 17:05:45 crc kubenswrapper[4792]: I0319 17:05:45.750589 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a03e279-0d5d-4baf-99e2-dd1c3ba441a9" path="/var/lib/kubelet/pods/1a03e279-0d5d-4baf-99e2-dd1c3ba441a9/volumes" Mar 19 17:05:45 crc kubenswrapper[4792]: I0319 17:05:45.751655 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk2zq" event={"ID":"879f2f03-2c21-4cd7-9a25-f5e13cb028e6","Type":"ContainerStarted","Data":"6196446712af900c26e665cebd0c458afdc2c1b3272ce4eeeaf7b2bddf0d822f"} Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.492774 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dc4d84b9d-c6zml_36d88883-fb51-4f6b-9d19-f7e312f3a9af/console/0.log" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.493288 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.641606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.641924 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.642007 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.642136 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.642209 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.642326 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.642413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9hb2\" (UniqueName: \"kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2\") pod \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\" (UID: \"36d88883-fb51-4f6b-9d19-f7e312f3a9af\") " Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.643598 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config" (OuterVolumeSpecName: "console-config") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.644039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca" (OuterVolumeSpecName: "service-ca") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.644257 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.644417 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.647327 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2" (OuterVolumeSpecName: "kube-api-access-q9hb2") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "kube-api-access-q9hb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.647462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.649239 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "36d88883-fb51-4f6b-9d19-f7e312f3a9af" (UID: "36d88883-fb51-4f6b-9d19-f7e312f3a9af"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744517 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744554 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744565 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-service-ca\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744575 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744584 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744594 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9hb2\" (UniqueName: \"kubernetes.io/projected/36d88883-fb51-4f6b-9d19-f7e312f3a9af-kube-api-access-q9hb2\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.744603 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/36d88883-fb51-4f6b-9d19-f7e312f3a9af-console-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:46 crc 
kubenswrapper[4792]: I0319 17:05:46.756310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44cgh" event={"ID":"46fc890d-ef4d-49ec-8f22-5200a9ec6167","Type":"ContainerStarted","Data":"34361215a01dd0e303ccd24fc88f909bd8a0c169ecbb0981a341425e951ae357"} Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.766013 4792 generic.go:334] "Generic (PLEG): container finished" podID="879f2f03-2c21-4cd7-9a25-f5e13cb028e6" containerID="ada9040d47fab10e3e29ed4fa5620eabc6bd3429768e1cff41bfe3f6feb55372" exitCode=0 Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.766223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk2zq" event={"ID":"879f2f03-2c21-4cd7-9a25-f5e13cb028e6","Type":"ContainerDied","Data":"ada9040d47fab10e3e29ed4fa5620eabc6bd3429768e1cff41bfe3f6feb55372"} Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.768116 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dc4d84b9d-c6zml_36d88883-fb51-4f6b-9d19-f7e312f3a9af/console/0.log" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.768264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dc4d84b9d-c6zml" event={"ID":"36d88883-fb51-4f6b-9d19-f7e312f3a9af","Type":"ContainerDied","Data":"db84af2788cd3b370b8aca56ccc401f44e8286d693617e12a5eb8a3fe57f0319"} Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.768431 4792 scope.go:117] "RemoveContainer" containerID="f11d1ffdadbf8cd04905518f77efc8605e1ff57362e3bd7de1516e3beea87adf" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.768292 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dc4d84b9d-c6zml" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.783363 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-44cgh" podStartSLOduration=10.971846143 podStartE2EDuration="14.783339154s" podCreationTimestamp="2026-03-19 17:05:32 +0000 UTC" firstStartedPulling="2026-03-19 17:05:42.256793204 +0000 UTC m=+1505.402850744" lastFinishedPulling="2026-03-19 17:05:46.068286215 +0000 UTC m=+1509.214343755" observedRunningTime="2026-03-19 17:05:46.774473451 +0000 UTC m=+1509.920530991" watchObservedRunningTime="2026-03-19 17:05:46.783339154 +0000 UTC m=+1509.929396694" Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.832525 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"] Mar 19 17:05:46 crc kubenswrapper[4792]: I0319 17:05:46.842411 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5dc4d84b9d-c6zml"] Mar 19 17:05:47 crc kubenswrapper[4792]: I0319 17:05:47.755535 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" path="/var/lib/kubelet/pods/36d88883-fb51-4f6b-9d19-f7e312f3a9af/volumes" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.241914 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.385899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts\") pod \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.386282 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9c2d\" (UniqueName: \"kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d\") pod \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\" (UID: \"879f2f03-2c21-4cd7-9a25-f5e13cb028e6\") " Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.387208 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "879f2f03-2c21-4cd7-9a25-f5e13cb028e6" (UID: "879f2f03-2c21-4cd7-9a25-f5e13cb028e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.395046 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d" (OuterVolumeSpecName: "kube-api-access-w9c2d") pod "879f2f03-2c21-4cd7-9a25-f5e13cb028e6" (UID: "879f2f03-2c21-4cd7-9a25-f5e13cb028e6"). InnerVolumeSpecName "kube-api-access-w9c2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:48 crc kubenswrapper[4792]: E0319 17:05:48.423566 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.490184 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.490559 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9c2d\" (UniqueName: \"kubernetes.io/projected/879f2f03-2c21-4cd7-9a25-f5e13cb028e6-kube-api-access-w9c2d\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.786163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerStarted","Data":"776a0bacbac4ae1eac591d9c7210c05c54b0c823ef91c0bdd03109e84e6d2412"} Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.787365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk2zq" event={"ID":"879f2f03-2c21-4cd7-9a25-f5e13cb028e6","Type":"ContainerDied","Data":"6196446712af900c26e665cebd0c458afdc2c1b3272ce4eeeaf7b2bddf0d822f"} Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.787390 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6196446712af900c26e665cebd0c458afdc2c1b3272ce4eeeaf7b2bddf0d822f" Mar 19 17:05:48 crc kubenswrapper[4792]: I0319 17:05:48.787414 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tk2zq" Mar 19 17:05:48 crc kubenswrapper[4792]: E0319 17:05:48.788654 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:cf7bcbc6648378b2d34252493fdfc4186d83f518a7e79106a86ea263e0630a0e\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" Mar 19 17:05:49 crc kubenswrapper[4792]: E0319 17:05:49.795891 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:cf7bcbc6648378b2d34252493fdfc4186d83f518a7e79106a86ea263e0630a0e\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.558539 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hkjvd" podUID="55ea5748-5aed-4ae4-a590-94a23170b160" containerName="ovn-controller" probeResult="failure" output=< Mar 19 17:05:50 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 17:05:50 crc kubenswrapper[4792]: > Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.616378 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.628798 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56rd9" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.749664 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gbzs2"] Mar 19 17:05:50 crc kubenswrapper[4792]: E0319 
17:05:50.762037 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f2f03-2c21-4cd7-9a25-f5e13cb028e6" containerName="mariadb-account-create-update" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.762063 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f2f03-2c21-4cd7-9a25-f5e13cb028e6" containerName="mariadb-account-create-update" Mar 19 17:05:50 crc kubenswrapper[4792]: E0319 17:05:50.762082 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" containerName="console" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.762089 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" containerName="console" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.762287 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="879f2f03-2c21-4cd7-9a25-f5e13cb028e6" containerName="mariadb-account-create-update" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.762306 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d88883-fb51-4f6b-9d19-f7e312f3a9af" containerName="console" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.763019 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.764541 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gbzs2"] Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.768490 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.806209 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce9f56e3-2b21-4854-ada6-3c81b790ccab","Type":"ContainerStarted","Data":"86ab351866d1c103e0c0f3ab9d5f8e598c483571bf8453966372908f7df4d978"} Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.806517 4792 generic.go:334] "Generic (PLEG): container finished" podID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerID="a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec" exitCode=0 Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.806569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerDied","Data":"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec"} Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.808553 4792 generic.go:334] "Generic (PLEG): container finished" podID="886bf823-6964-4a71-807d-2b448201fc5e" containerID="bc1b64f0e6128b699c99dc8dcb63e408b32a3cd1bb88f233cb5b2f619cde4569" exitCode=0 Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.809559 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerDied","Data":"bc1b64f0e6128b699c99dc8dcb63e408b32a3cd1bb88f233cb5b2f619cde4569"} Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.829570 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.892700208 podStartE2EDuration="59.829548319s" podCreationTimestamp="2026-03-19 17:04:51 +0000 UTC" firstStartedPulling="2026-03-19 17:05:11.493559541 +0000 UTC m=+1474.639617071" lastFinishedPulling="2026-03-19 17:05:50.430407642 +0000 UTC m=+1513.576465182" observedRunningTime="2026-03-19 17:05:50.82740872 +0000 UTC m=+1513.973466260" watchObservedRunningTime="2026-03-19 17:05:50.829548319 +0000 UTC m=+1513.975605859" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.836564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovs-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.836643 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-config\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.836749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.837540 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb9g\" (UniqueName: \"kubernetes.io/projected/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-kube-api-access-csb9g\") pod 
\"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.837639 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovn-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.840336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-combined-ca-bundle\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb9g\" (UniqueName: \"kubernetes.io/projected/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-kube-api-access-csb9g\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovn-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960331 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-combined-ca-bundle\") pod 
\"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovs-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-config\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.960465 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.961656 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovn-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.964141 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-ovs-rundir\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " 
pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.964701 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-config\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.968608 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-combined-ca-bundle\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.976117 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:50 crc kubenswrapper[4792]: I0319 17:05:50.985776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb9g\" (UniqueName: \"kubernetes.io/projected/f7da4046-dc50-4b5b-a8bc-aecf5628f7ca-kube-api-access-csb9g\") pod \"ovn-controller-metrics-gbzs2\" (UID: \"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca\") " pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.047430 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.082308 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gbzs2" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.110762 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.115334 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.133901 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.140927 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.170050 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.170111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.170197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.170269 
4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz8v\" (UniqueName: \"kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.271858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.271966 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.272059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz8v\" (UniqueName: \"kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.272157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.274132 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.274782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.284775 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.295446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz8v\" (UniqueName: \"kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v\") pod \"dnsmasq-dns-57d65f699f-5ps76\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.340049 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.340302 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="dnsmasq-dns" containerID="cri-o://244f908981f749670b7f8ab9ada809e485f41b42786ecbcc1ee7c39f84d3ed71" gracePeriod=10 Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.342078 
4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.428905 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.430756 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.442254 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.482102 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.489956 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8crvq\" (UniqueName: \"kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.490067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.490091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.490210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.490253 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.494513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.591613 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8crvq\" (UniqueName: \"kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.592115 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.592145 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.592290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.592347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.593082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.593723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.593753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" 
(UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.594407 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.637067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8crvq\" (UniqueName: \"kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq\") pod \"dnsmasq-dns-b8fbc5445-m4ldr\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.711645 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.719161 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.800228 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") pod \"2aaf8c82-efe8-424c-8e61-e0c418980262\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.800268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc\") pod \"2aaf8c82-efe8-424c-8e61-e0c418980262\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.807244 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gbzs2"] Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.862373 4792 generic.go:334] "Generic (PLEG): container finished" podID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerID="244f908981f749670b7f8ab9ada809e485f41b42786ecbcc1ee7c39f84d3ed71" exitCode=0 Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.862596 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" event={"ID":"ca036e7f-6937-42d4-8d72-6a4ebfb8b789","Type":"ContainerDied","Data":"244f908981f749670b7f8ab9ada809e485f41b42786ecbcc1ee7c39f84d3ed71"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.866813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" event={"ID":"2aaf8c82-efe8-424c-8e61-e0c418980262","Type":"ContainerDied","Data":"893fff7973d7f64792bea973d43f2aba1a36c30aac4a52b7102eaf2e8e01a1d3"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.866855 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9sjzh" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.866873 4792 scope.go:117] "RemoveContainer" containerID="2735d335aba388d12b3bc62bb62381d3bb14a00bf39f991f3b3353e4821f14cd" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.871476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gbzs2" event={"ID":"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca","Type":"ContainerStarted","Data":"fcc5dccc5534adfe870306f11827a8818ee46ab9847e4b1f34891159fe565c25"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.875224 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerDied","Data":"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.875421 4792 generic.go:334] "Generic (PLEG): container finished" podID="ae950307-1857-4a46-ab98-55843387f128" containerID="7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e" exitCode=0 Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.880944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2aaf8c82-efe8-424c-8e61-e0c418980262" (UID: "2aaf8c82-efe8-424c-8e61-e0c418980262"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.890377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerStarted","Data":"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.891800 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.893269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerStarted","Data":"6e6db9b8741f1d33191e512ae255863a39fcf3e2a5412c3bddb2247e63fca59a"} Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.894444 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.901453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7\") pod \"2aaf8c82-efe8-424c-8e61-e0c418980262\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.905034 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config" (OuterVolumeSpecName: "config") pod "2aaf8c82-efe8-424c-8e61-e0c418980262" (UID: "2aaf8c82-efe8-424c-8e61-e0c418980262"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:51 crc kubenswrapper[4792]: W0319 17:05:51.905949 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2aaf8c82-efe8-424c-8e61-e0c418980262/volumes/kubernetes.io~configmap/config Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.905981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config" (OuterVolumeSpecName: "config") pod "2aaf8c82-efe8-424c-8e61-e0c418980262" (UID: "2aaf8c82-efe8-424c-8e61-e0c418980262"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.910727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") pod \"2aaf8c82-efe8-424c-8e61-e0c418980262\" (UID: \"2aaf8c82-efe8-424c-8e61-e0c418980262\") " Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.911614 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.911633 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2aaf8c82-efe8-424c-8e61-e0c418980262-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:51 crc kubenswrapper[4792]: I0319 17:05:51.911988 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7" (OuterVolumeSpecName: "kube-api-access-zh4m7") pod "2aaf8c82-efe8-424c-8e61-e0c418980262" (UID: "2aaf8c82-efe8-424c-8e61-e0c418980262"). InnerVolumeSpecName "kube-api-access-zh4m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.028407 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/2aaf8c82-efe8-424c-8e61-e0c418980262-kube-api-access-zh4m7\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.062832 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.535856299 podStartE2EDuration="1m12.062810984s" podCreationTimestamp="2026-03-19 17:04:40 +0000 UTC" firstStartedPulling="2026-03-19 17:04:42.704769732 +0000 UTC m=+1445.850827272" lastFinishedPulling="2026-03-19 17:05:15.231724417 +0000 UTC m=+1478.377781957" observedRunningTime="2026-03-19 17:05:51.988009341 +0000 UTC m=+1515.134066891" watchObservedRunningTime="2026-03-19 17:05:52.062810984 +0000 UTC m=+1515.208868524" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.082743 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.543722989 podStartE2EDuration="1m12.08272175s" podCreationTimestamp="2026-03-19 17:04:40 +0000 UTC" firstStartedPulling="2026-03-19 17:04:43.162189595 +0000 UTC m=+1446.308247135" lastFinishedPulling="2026-03-19 17:05:15.701188316 +0000 UTC m=+1478.847245896" observedRunningTime="2026-03-19 17:05:52.047541804 +0000 UTC m=+1515.193599344" watchObservedRunningTime="2026-03-19 17:05:52.08272175 +0000 UTC m=+1515.228779290" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.282828 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.310291 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9sjzh"] Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.524609 4792 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:05:52 crc kubenswrapper[4792]: W0319 17:05:52.529560 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f62387d_4adf_4685_a9ce_dbc93745f149.slice/crio-275c20b9e7dc57badcc3bb301d7369dcc6a9dcc5742cc9b541d9834ff7560a2d WatchSource:0}: Error finding container 275c20b9e7dc57badcc3bb301d7369dcc6a9dcc5742cc9b541d9834ff7560a2d: Status 404 returned error can't find the container with id 275c20b9e7dc57badcc3bb301d7369dcc6a9dcc5742cc9b541d9834ff7560a2d Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.654092 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:05:52 crc kubenswrapper[4792]: W0319 17:05:52.658095 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6e8887_3924_4571_8733_6e3bf3a47454.slice/crio-5908fbd8eef541488d3b830694a6d131c60db55983a69653f6ea3a52f4d84239 WatchSource:0}: Error finding container 5908fbd8eef541488d3b830694a6d131c60db55983a69653f6ea3a52f4d84239: Status 404 returned error can't find the container with id 5908fbd8eef541488d3b830694a6d131c60db55983a69653f6ea3a52f4d84239 Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.705275 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.875570 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config\") pod \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.875613 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pljcj\" (UniqueName: \"kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj\") pod \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.875713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc\") pod \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\" (UID: \"ca036e7f-6937-42d4-8d72-6a4ebfb8b789\") " Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.883446 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj" (OuterVolumeSpecName: "kube-api-access-pljcj") pod "ca036e7f-6937-42d4-8d72-6a4ebfb8b789" (UID: "ca036e7f-6937-42d4-8d72-6a4ebfb8b789"). InnerVolumeSpecName "kube-api-access-pljcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.923258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" event={"ID":"ca036e7f-6937-42d4-8d72-6a4ebfb8b789","Type":"ContainerDied","Data":"771e2506200fe4cea9e7ecf7daa114de59a5d3935dd77b8a5f90d14392c84378"} Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.923590 4792 scope.go:117] "RemoveContainer" containerID="244f908981f749670b7f8ab9ada809e485f41b42786ecbcc1ee7c39f84d3ed71" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.923717 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-qsdc7" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.934322 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" event={"ID":"9f62387d-4adf-4685-a9ce-dbc93745f149","Type":"ContainerStarted","Data":"275c20b9e7dc57badcc3bb301d7369dcc6a9dcc5742cc9b541d9834ff7560a2d"} Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.949897 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca036e7f-6937-42d4-8d72-6a4ebfb8b789" (UID: "ca036e7f-6937-42d4-8d72-6a4ebfb8b789"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.975219 4792 scope.go:117] "RemoveContainer" containerID="fc3db147dc4b7038005a2c97125e27861605de5a085332b46a0c8fa71d463842" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.978014 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pljcj\" (UniqueName: \"kubernetes.io/projected/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-kube-api-access-pljcj\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:52 crc kubenswrapper[4792]: I0319 17:05:52.978041 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.003106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerStarted","Data":"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5"} Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.004004 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.016701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gbzs2" event={"ID":"f7da4046-dc50-4b5b-a8bc-aecf5628f7ca","Type":"ContainerStarted","Data":"794c53efb7c1a7164610f01867b992ba2821ff4809950f5afd1c45ce1e67321d"} Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.031926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" event={"ID":"eb6e8887-3924-4571-8733-6e3bf3a47454","Type":"ContainerStarted","Data":"5908fbd8eef541488d3b830694a6d131c60db55983a69653f6ea3a52f4d84239"} Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.033066 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config" (OuterVolumeSpecName: "config") pod "ca036e7f-6937-42d4-8d72-6a4ebfb8b789" (UID: "ca036e7f-6937-42d4-8d72-6a4ebfb8b789"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.050610 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.800941011 podStartE2EDuration="1m13.05058561s" podCreationTimestamp="2026-03-19 17:04:40 +0000 UTC" firstStartedPulling="2026-03-19 17:04:42.3887376 +0000 UTC m=+1445.534795150" lastFinishedPulling="2026-03-19 17:05:16.638382209 +0000 UTC m=+1479.784439749" observedRunningTime="2026-03-19 17:05:53.037942053 +0000 UTC m=+1516.183999603" watchObservedRunningTime="2026-03-19 17:05:53.05058561 +0000 UTC m=+1516.196643160" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.115415 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca036e7f-6937-42d4-8d72-6a4ebfb8b789-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.133129 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gbzs2" podStartSLOduration=3.133107184 podStartE2EDuration="3.133107184s" podCreationTimestamp="2026-03-19 17:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:53.09759558 +0000 UTC m=+1516.243653120" watchObservedRunningTime="2026-03-19 17:05:53.133107184 +0000 UTC m=+1516.279164724" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.295699 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"] Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.313067 4792 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-qsdc7"] Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.419889 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.420397 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.586176 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.750688 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" path="/var/lib/kubelet/pods/2aaf8c82-efe8-424c-8e61-e0c418980262/volumes" Mar 19 17:05:53 crc kubenswrapper[4792]: I0319 17:05:53.751459 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" path="/var/lib/kubelet/pods/ca036e7f-6937-42d4-8d72-6a4ebfb8b789/volumes" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.034279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2f154134-be00-48ab-a2b9-28cce44cc28a","Type":"ContainerStarted","Data":"fb2633b4bf4fcb62238a6707217b199ce1d0de7d9d6a6a7c5067ee71e7825bda"} Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.035862 4792 generic.go:334] "Generic (PLEG): container finished" podID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerID="4ff3ae105c28d1d24dde1d2ed774b42ab2b3a65611d7e3095d9b2d4b2e46d91a" exitCode=0 Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.035971 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" event={"ID":"eb6e8887-3924-4571-8733-6e3bf3a47454","Type":"ContainerDied","Data":"4ff3ae105c28d1d24dde1d2ed774b42ab2b3a65611d7e3095d9b2d4b2e46d91a"} Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 
17:05:54.036004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" event={"ID":"eb6e8887-3924-4571-8733-6e3bf3a47454","Type":"ContainerStarted","Data":"fc542e4924e7e32506d74040ac72fa74f8f9199dba68f18047a2145a61a7b577"} Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.036056 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.038018 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerID="93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392" exitCode=0 Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.038148 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" event={"ID":"9f62387d-4adf-4685-a9ce-dbc93745f149","Type":"ContainerDied","Data":"93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392"} Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.068659 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=55.295008125 podStartE2EDuration="1m1.068642677s" podCreationTimestamp="2026-03-19 17:04:53 +0000 UTC" firstStartedPulling="2026-03-19 17:05:10.331759897 +0000 UTC m=+1473.477817437" lastFinishedPulling="2026-03-19 17:05:16.105394449 +0000 UTC m=+1479.251451989" observedRunningTime="2026-03-19 17:05:54.064204424 +0000 UTC m=+1517.210261974" watchObservedRunningTime="2026-03-19 17:05:54.068642677 +0000 UTC m=+1517.214700217" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.152044 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" podStartSLOduration=3.152024046 podStartE2EDuration="3.152024046s" podCreationTimestamp="2026-03-19 17:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:54.138418713 +0000 UTC m=+1517.284476253" watchObservedRunningTime="2026-03-19 17:05:54.152024046 +0000 UTC m=+1517.298081586" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.230742 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.312319 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 19 17:05:54 crc kubenswrapper[4792]: E0319 17:05:54.312701 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" containerName="init" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.312717 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" containerName="init" Mar 19 17:05:54 crc kubenswrapper[4792]: E0319 17:05:54.312753 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="dnsmasq-dns" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.312760 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="dnsmasq-dns" Mar 19 17:05:54 crc kubenswrapper[4792]: E0319 17:05:54.312768 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="init" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.312774 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="init" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.316873 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aaf8c82-efe8-424c-8e61-e0c418980262" containerName="init" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.316909 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca036e7f-6937-42d4-8d72-6a4ebfb8b789" containerName="dnsmasq-dns" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.318015 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.333569 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.333791 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jhppr" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.333917 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.334025 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.345140 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hkjvd-config-7tzpk"] Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346436 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346476 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-config\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346516 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-scripts\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346645 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: 
I0319 17:05:54.346671 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvgs\" (UniqueName: \"kubernetes.io/projected/7c80c62c-85e8-4de7-984b-eac919232564-kube-api-access-8xvgs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.346690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c80c62c-85e8-4de7-984b-eac919232564-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.350096 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.353617 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.380351 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hkjvd-config-7tzpk"] Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.453913 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-scripts\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.453960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7c56\" (UniqueName: \"kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 
crc kubenswrapper[4792]: I0319 17:05:54.453993 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvgs\" (UniqueName: \"kubernetes.io/projected/7c80c62c-85e8-4de7-984b-eac919232564-kube-api-access-8xvgs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454117 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c80c62c-85e8-4de7-984b-eac919232564-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454141 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454169 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454226 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-config\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454316 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.454703 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-scripts\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.455283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c80c62c-85e8-4de7-984b-eac919232564-config\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.455816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c80c62c-85e8-4de7-984b-eac919232564-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.465266 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.473019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 
17:05:54.483946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c80c62c-85e8-4de7-984b-eac919232564-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.487527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvgs\" (UniqueName: \"kubernetes.io/projected/7c80c62c-85e8-4de7-984b-eac919232564-kube-api-access-8xvgs\") pod \"ovn-northd-0\" (UID: \"7c80c62c-85e8-4de7-984b-eac919232564\") " pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556222 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7c56\" (UniqueName: \"kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556384 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556484 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.556512 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.557080 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.557113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.557149 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.557314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.566813 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.575487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7c56\" (UniqueName: \"kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56\") pod \"ovn-controller-hkjvd-config-7tzpk\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.645334 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 19 17:05:54 crc kubenswrapper[4792]: I0319 17:05:54.673677 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.059192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" event={"ID":"9f62387d-4adf-4685-a9ce-dbc93745f149","Type":"ContainerStarted","Data":"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0"} Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.060825 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.064181 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerID="eb4a7be4f50be7354e01d506a44dffdf85cda1ccc2a413dfca36b1e0196c8715" exitCode=0 Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.064816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerDied","Data":"eb4a7be4f50be7354e01d506a44dffdf85cda1ccc2a413dfca36b1e0196c8715"} Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.103024 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" podStartSLOduration=4.103005611 podStartE2EDuration="4.103005611s" podCreationTimestamp="2026-03-19 17:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:55.101760378 +0000 UTC m=+1518.247817938" watchObservedRunningTime="2026-03-19 17:05:55.103005611 +0000 UTC m=+1518.249063151" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.269618 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.458748 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-hkjvd-config-7tzpk"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.598127 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hkjvd" podUID="55ea5748-5aed-4ae4-a590-94a23170b160" containerName="ovn-controller" probeResult="failure" output=< Mar 19 17:05:55 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 17:05:55 crc kubenswrapper[4792]: > Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.686294 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rd64m"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.688411 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.717105 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rd64m"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.803815 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-eeeb-account-create-update-jr92l"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.805070 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.811249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.811533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzc5\" (UniqueName: \"kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.812632 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.842082 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eeeb-account-create-update-jr92l"] Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.914081 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.915025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: 
\"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.915197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljzc5\" (UniqueName: \"kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.915360 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkp6w\" (UniqueName: \"kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.915529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.942601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljzc5\" (UniqueName: \"kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5\") pod \"keystone-db-create-rd64m\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:55 crc kubenswrapper[4792]: I0319 17:05:55.999419 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-h8f7f"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.001168 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.014751 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rd64m" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.015877 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-h8f7f"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.016819 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.017027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkp6w\" (UniqueName: \"kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.020141 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.055039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkp6w\" (UniqueName: \"kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w\") pod \"keystone-eeeb-account-create-update-jr92l\" (UID: 
\"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.119811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.119865 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcqq\" (UniqueName: \"kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.120569 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.128209 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a28a-account-create-update-pxpnr"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.129622 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.130636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd-config-7tzpk" event={"ID":"77a33bb0-077b-4fd6-a000-2bb90dccd2be","Type":"ContainerStarted","Data":"ff5e5c192fab00a3f88c460818fa989e6d942376d4d422d4594566a5ab19c012"} Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.130666 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd-config-7tzpk" event={"ID":"77a33bb0-077b-4fd6-a000-2bb90dccd2be","Type":"ContainerStarted","Data":"86fb71982703e9184b799e876b0e11ce21eeea641d663e007790b69fb398b39a"} Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.132869 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.140295 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a28a-account-create-update-pxpnr"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.153270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerStarted","Data":"e8db222d890264fc685f1a1f921fae4f10cc91b5bca90eb6aab73ed3f1e1b91c"} Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.153951 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.174009 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c80c62c-85e8-4de7-984b-eac919232564","Type":"ContainerStarted","Data":"1a8fce2558238500afa26240980716191f0de7c828520dc05b7f76456ac3220f"} Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.181217 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-hkjvd-config-7tzpk" podStartSLOduration=2.181198349 podStartE2EDuration="2.181198349s" podCreationTimestamp="2026-03-19 17:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:56.174528127 +0000 UTC m=+1519.320585677" watchObservedRunningTime="2026-03-19 17:05:56.181198349 +0000 UTC m=+1519.327255879" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.210570 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371960.644226 podStartE2EDuration="1m16.210549965s" podCreationTimestamp="2026-03-19 17:04:40 +0000 UTC" firstStartedPulling="2026-03-19 17:04:42.700079883 +0000 UTC m=+1445.846137423" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:56.196328175 +0000 UTC m=+1519.342385715" watchObservedRunningTime="2026-03-19 17:05:56.210549965 +0000 UTC m=+1519.356607505" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.223319 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.223997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.224074 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rwcqq\" (UniqueName: \"kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.224891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwhb\" (UniqueName: \"kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.225445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.252255 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcqq\" (UniqueName: \"kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq\") pod \"placement-db-create-h8f7f\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.326096 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-h8f7f" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.326666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwhb\" (UniqueName: \"kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.326761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.327926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.350893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mwhb\" (UniqueName: \"kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb\") pod \"placement-a28a-account-create-update-pxpnr\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.479281 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.755359 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rd64m"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.947231 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eeeb-account-create-update-jr92l"] Mar 19 17:05:56 crc kubenswrapper[4792]: I0319 17:05:56.987279 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-h8f7f"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.114350 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cbpz5"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.116098 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.127003 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cbpz5"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.153214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.153276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkpg\" (UniqueName: \"kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " 
pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.193190 4792 generic.go:334] "Generic (PLEG): container finished" podID="77a33bb0-077b-4fd6-a000-2bb90dccd2be" containerID="ff5e5c192fab00a3f88c460818fa989e6d942376d4d422d4594566a5ab19c012" exitCode=0 Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.193484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd-config-7tzpk" event={"ID":"77a33bb0-077b-4fd6-a000-2bb90dccd2be","Type":"ContainerDied","Data":"ff5e5c192fab00a3f88c460818fa989e6d942376d4d422d4594566a5ab19c012"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.204402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eeeb-account-create-update-jr92l" event={"ID":"66ccf938-23a0-4851-8cc4-a30bc91fdf3a","Type":"ContainerStarted","Data":"fa19cd2430aac9fc22533e5d7811fc13ef2520ced13be2303f6e87c7101015a5"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.216691 4792 generic.go:334] "Generic (PLEG): container finished" podID="46fc890d-ef4d-49ec-8f22-5200a9ec6167" containerID="34361215a01dd0e303ccd24fc88f909bd8a0c169ecbb0981a341425e951ae357" exitCode=0 Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.216805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44cgh" event={"ID":"46fc890d-ef4d-49ec-8f22-5200a9ec6167","Type":"ContainerDied","Data":"34361215a01dd0e303ccd24fc88f909bd8a0c169ecbb0981a341425e951ae357"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.226533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h8f7f" event={"ID":"fd49301f-b993-4978-ad6a-393fc9fdcb64","Type":"ContainerStarted","Data":"7c8a94784327739d3f4269f3ee823f6620d94b3e56bcca43aec6bf98d638502c"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.229075 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/mysqld-exporter-bf32-account-create-update-gwgzp"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.235306 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rd64m" event={"ID":"c423805e-1778-42a6-a1ac-b44254aa03fe","Type":"ContainerStarted","Data":"28d7b8fda4fe20ed08ac788846fc922ad7b15904f8b6c58ee2eb695ab36fc647"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.235483 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rd64m" event={"ID":"c423805e-1778-42a6-a1ac-b44254aa03fe","Type":"ContainerStarted","Data":"ea775aed8aed616820ab4425644dc4276cf1f50985deb45718bfc8fd730bb0db"} Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.235617 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.240461 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.254800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.254853 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkpg\" (UniqueName: \"kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.254914 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfx4\" (UniqueName: \"kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.255056 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.256044 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.266122 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf32-account-create-update-gwgzp"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.277057 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a28a-account-create-update-pxpnr"] Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.301691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbkpg\" (UniqueName: \"kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg\") pod \"mysqld-exporter-openstack-db-create-cbpz5\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " 
pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.318025 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rd64m" podStartSLOduration=2.318005747 podStartE2EDuration="2.318005747s" podCreationTimestamp="2026-03-19 17:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:57.305818533 +0000 UTC m=+1520.451876073" watchObservedRunningTime="2026-03-19 17:05:57.318005747 +0000 UTC m=+1520.464063287" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.356022 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtfx4\" (UniqueName: \"kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.356181 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.357081 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.375166 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtfx4\" (UniqueName: \"kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4\") pod \"mysqld-exporter-bf32-account-create-update-gwgzp\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.433770 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.562255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:05:57 crc kubenswrapper[4792]: I0319 17:05:57.950422 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cbpz5"] Mar 19 17:05:57 crc kubenswrapper[4792]: W0319 17:05:57.986343 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34502e69_2af6_4cbd_8854_753cd654fc49.slice/crio-3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b WatchSource:0}: Error finding container 3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b: Status 404 returned error can't find the container with id 3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.192977 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf32-account-create-update-gwgzp"] Mar 19 17:05:58 crc kubenswrapper[4792]: W0319 17:05:58.199652 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce715d39_f1ec_46f6_be8b_de76de850a77.slice/crio-b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0 WatchSource:0}: 
Error finding container b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0: Status 404 returned error can't find the container with id b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0 Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.256350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" event={"ID":"34502e69-2af6-4cbd-8854-753cd654fc49","Type":"ContainerStarted","Data":"5d34c95811a754bd895a6883376101a277ed72f1df976aac47b6f0107cdeae95"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.256727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" event={"ID":"34502e69-2af6-4cbd-8854-753cd654fc49","Type":"ContainerStarted","Data":"3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.290343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eeeb-account-create-update-jr92l" event={"ID":"66ccf938-23a0-4851-8cc4-a30bc91fdf3a","Type":"ContainerStarted","Data":"c2d4783ebd3950583d6c59450d4d0dac69bdd8819b685dae1917d852b004270e"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.312732 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" podStartSLOduration=1.312711483 podStartE2EDuration="1.312711483s" podCreationTimestamp="2026-03-19 17:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:58.295799289 +0000 UTC m=+1521.441856849" watchObservedRunningTime="2026-03-19 17:05:58.312711483 +0000 UTC m=+1521.458769023" Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.333412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h8f7f" 
event={"ID":"fd49301f-b993-4978-ad6a-393fc9fdcb64","Type":"ContainerStarted","Data":"185bdaf97883defdde6f3f1d6b8bf1c91e29b94c9a9315bd83b34345ff74cf2e"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.336035 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-eeeb-account-create-update-jr92l" podStartSLOduration=3.336018393 podStartE2EDuration="3.336018393s" podCreationTimestamp="2026-03-19 17:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:58.333237816 +0000 UTC m=+1521.479295356" watchObservedRunningTime="2026-03-19 17:05:58.336018393 +0000 UTC m=+1521.482075933" Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.383904 4792 generic.go:334] "Generic (PLEG): container finished" podID="c423805e-1778-42a6-a1ac-b44254aa03fe" containerID="28d7b8fda4fe20ed08ac788846fc922ad7b15904f8b6c58ee2eb695ab36fc647" exitCode=0 Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.384190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rd64m" event={"ID":"c423805e-1778-42a6-a1ac-b44254aa03fe","Type":"ContainerDied","Data":"28d7b8fda4fe20ed08ac788846fc922ad7b15904f8b6c58ee2eb695ab36fc647"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.394375 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-h8f7f" podStartSLOduration=3.394354164 podStartE2EDuration="3.394354164s" podCreationTimestamp="2026-03-19 17:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:58.375384534 +0000 UTC m=+1521.521442074" watchObservedRunningTime="2026-03-19 17:05:58.394354164 +0000 UTC m=+1521.540411704" Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.426206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-a28a-account-create-update-pxpnr" event={"ID":"e16f1378-b929-44f2-a851-c4de7620ae5b","Type":"ContainerStarted","Data":"3ab5a8bd68453441589c53c619b9e992e73095eccb2ad26b200e6839b26da278"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.426262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a28a-account-create-update-pxpnr" event={"ID":"e16f1378-b929-44f2-a851-c4de7620ae5b","Type":"ContainerStarted","Data":"7be99f9e98a046f9eefc3d0358c026149fd48ecbdbea0d4c8df487c4082de4a3"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.447340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" event={"ID":"ce715d39-f1ec-46f6-be8b-de76de850a77","Type":"ContainerStarted","Data":"b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0"} Mar 19 17:05:58 crc kubenswrapper[4792]: I0319 17:05:58.489986 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a28a-account-create-update-pxpnr" podStartSLOduration=2.489966069 podStartE2EDuration="2.489966069s" podCreationTimestamp="2026-03-19 17:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:05:58.489170098 +0000 UTC m=+1521.635227638" watchObservedRunningTime="2026-03-19 17:05:58.489966069 +0000 UTC m=+1521.636023609" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.274855 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.281109 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-44cgh" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426557 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426721 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426726 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run" (OuterVolumeSpecName: "var-run") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426801 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426875 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426905 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426934 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp949\" (UniqueName: \"kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.427028 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.427066 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.427087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.427134 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7c56\" (UniqueName: \"kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56\") pod \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\" (UID: \"77a33bb0-077b-4fd6-a000-2bb90dccd2be\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.427150 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices\") pod \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\" (UID: \"46fc890d-ef4d-49ec-8f22-5200a9ec6167\") " Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 
17:05:59.427640 4792 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.426903 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.428203 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.428995 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.429650 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.430043 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.430366 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts" (OuterVolumeSpecName: "scripts") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.529436 4792 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.529470 4792 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.529480 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77a33bb0-077b-4fd6-a000-2bb90dccd2be-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.529491 4792 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46fc890d-ef4d-49ec-8f22-5200a9ec6167-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 
crc kubenswrapper[4792]: I0319 17:05:59.529500 4792 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77a33bb0-077b-4fd6-a000-2bb90dccd2be-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.529508 4792 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.811684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.811802 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56" (OuterVolumeSpecName: "kube-api-access-p7c56") pod "77a33bb0-077b-4fd6-a000-2bb90dccd2be" (UID: "77a33bb0-077b-4fd6-a000-2bb90dccd2be"). InnerVolumeSpecName "kube-api-access-p7c56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.812168 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts" (OuterVolumeSpecName: "scripts") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.813895 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949" (OuterVolumeSpecName: "kube-api-access-cp949") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "kube-api-access-cp949". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.815345 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.819428 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "46fc890d-ef4d-49ec-8f22-5200a9ec6167" (UID: "46fc890d-ef4d-49ec-8f22-5200a9ec6167"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.840636 4792 generic.go:334] "Generic (PLEG): container finished" podID="e16f1378-b929-44f2-a851-c4de7620ae5b" containerID="3ab5a8bd68453441589c53c619b9e992e73095eccb2ad26b200e6839b26da278" exitCode=0 Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.840706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a28a-account-create-update-pxpnr" event={"ID":"e16f1378-b929-44f2-a851-c4de7620ae5b","Type":"ContainerDied","Data":"3ab5a8bd68453441589c53c619b9e992e73095eccb2ad26b200e6839b26da278"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.854016 4792 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.854038 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fc890d-ef4d-49ec-8f22-5200a9ec6167-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.854051 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp949\" (UniqueName: \"kubernetes.io/projected/46fc890d-ef4d-49ec-8f22-5200a9ec6167-kube-api-access-cp949\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.854062 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.854071 4792 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46fc890d-ef4d-49ec-8f22-5200a9ec6167-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc 
kubenswrapper[4792]: I0319 17:05:59.854080 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7c56\" (UniqueName: \"kubernetes.io/projected/77a33bb0-077b-4fd6-a000-2bb90dccd2be-kube-api-access-p7c56\") on node \"crc\" DevicePath \"\"" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.863476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hkjvd-config-7tzpk" event={"ID":"77a33bb0-077b-4fd6-a000-2bb90dccd2be","Type":"ContainerDied","Data":"86fb71982703e9184b799e876b0e11ce21eeea641d663e007790b69fb398b39a"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.863581 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fb71982703e9184b799e876b0e11ce21eeea641d663e007790b69fb398b39a" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.863724 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hkjvd-config-7tzpk" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.889244 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce715d39-f1ec-46f6-be8b-de76de850a77" containerID="64eaffe2b280b2e08a5d878475b75b106dbcfd412100fee0183c7b96844ffe5b" exitCode=0 Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.889441 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" event={"ID":"ce715d39-f1ec-46f6-be8b-de76de850a77","Type":"ContainerDied","Data":"64eaffe2b280b2e08a5d878475b75b106dbcfd412100fee0183c7b96844ffe5b"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.903985 4792 generic.go:334] "Generic (PLEG): container finished" podID="34502e69-2af6-4cbd-8854-753cd654fc49" containerID="5d34c95811a754bd895a6883376101a277ed72f1df976aac47b6f0107cdeae95" exitCode=0 Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.904191 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" event={"ID":"34502e69-2af6-4cbd-8854-753cd654fc49","Type":"ContainerDied","Data":"5d34c95811a754bd895a6883376101a277ed72f1df976aac47b6f0107cdeae95"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.906176 4792 generic.go:334] "Generic (PLEG): container finished" podID="66ccf938-23a0-4851-8cc4-a30bc91fdf3a" containerID="c2d4783ebd3950583d6c59450d4d0dac69bdd8819b685dae1917d852b004270e" exitCode=0 Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.906234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eeeb-account-create-update-jr92l" event={"ID":"66ccf938-23a0-4851-8cc4-a30bc91fdf3a","Type":"ContainerDied","Data":"c2d4783ebd3950583d6c59450d4d0dac69bdd8819b685dae1917d852b004270e"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.907659 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44cgh" event={"ID":"46fc890d-ef4d-49ec-8f22-5200a9ec6167","Type":"ContainerDied","Data":"381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.907812 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-44cgh" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.908080 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="381c9717efc4001bbaede22936da81a9254e6cc615614a33e88a252db9c42ea7" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.916907 4792 generic.go:334] "Generic (PLEG): container finished" podID="fd49301f-b993-4978-ad6a-393fc9fdcb64" containerID="185bdaf97883defdde6f3f1d6b8bf1c91e29b94c9a9315bd83b34345ff74cf2e" exitCode=0 Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.917015 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h8f7f" event={"ID":"fd49301f-b993-4978-ad6a-393fc9fdcb64","Type":"ContainerDied","Data":"185bdaf97883defdde6f3f1d6b8bf1c91e29b94c9a9315bd83b34345ff74cf2e"} Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.918101 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sth98"] Mar 19 17:05:59 crc kubenswrapper[4792]: E0319 17:05:59.918661 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fc890d-ef4d-49ec-8f22-5200a9ec6167" containerName="swift-ring-rebalance" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.918683 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fc890d-ef4d-49ec-8f22-5200a9ec6167" containerName="swift-ring-rebalance" Mar 19 17:05:59 crc kubenswrapper[4792]: E0319 17:05:59.918704 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a33bb0-077b-4fd6-a000-2bb90dccd2be" containerName="ovn-config" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.918711 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a33bb0-077b-4fd6-a000-2bb90dccd2be" containerName="ovn-config" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.919018 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fc890d-ef4d-49ec-8f22-5200a9ec6167" 
containerName="swift-ring-rebalance" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.919037 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a33bb0-077b-4fd6-a000-2bb90dccd2be" containerName="ovn-config" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.920024 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sth98" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.955414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sth98"] Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.956528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfjg\" (UniqueName: \"kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:05:59 crc kubenswrapper[4792]: I0319 17:05:59.956599 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.058227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfjg\" (UniqueName: \"kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.058323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.059351 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.082415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfjg\" (UniqueName: \"kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg\") pod \"glance-db-create-sth98\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.175886 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-aede-account-create-update-4zdv8"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.177666 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.184742 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.189616 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aede-account-create-update-4zdv8"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.206929 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565666-mmx49"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.209216 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.216615 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.216854 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.217041 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.220195 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565666-mmx49"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.262662 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlljw\" (UniqueName: \"kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw\") pod \"auto-csr-approver-29565666-mmx49\" (UID: \"175b0c5f-9753-4154-9086-e39e498077e5\") " pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.262742 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sztj\" (UniqueName: \"kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj\") pod \"glance-aede-account-create-update-4zdv8\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.262869 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts\") pod \"glance-aede-account-create-update-4zdv8\" (UID: 
\"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.323735 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sth98" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.365248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlljw\" (UniqueName: \"kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw\") pod \"auto-csr-approver-29565666-mmx49\" (UID: \"175b0c5f-9753-4154-9086-e39e498077e5\") " pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.365321 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sztj\" (UniqueName: \"kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj\") pod \"glance-aede-account-create-update-4zdv8\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.365426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts\") pod \"glance-aede-account-create-update-4zdv8\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.366111 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts\") pod \"glance-aede-account-create-update-4zdv8\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 
17:06:00.384735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlljw\" (UniqueName: \"kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw\") pod \"auto-csr-approver-29565666-mmx49\" (UID: \"175b0c5f-9753-4154-9086-e39e498077e5\") " pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.385588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sztj\" (UniqueName: \"kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj\") pod \"glance-aede-account-create-update-4zdv8\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.410295 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hkjvd-config-7tzpk"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.420602 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hkjvd-config-7tzpk"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.453623 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rd64m" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.512370 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.530740 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.568558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljzc5\" (UniqueName: \"kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5\") pod \"c423805e-1778-42a6-a1ac-b44254aa03fe\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.568728 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts\") pod \"c423805e-1778-42a6-a1ac-b44254aa03fe\" (UID: \"c423805e-1778-42a6-a1ac-b44254aa03fe\") " Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.570323 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c423805e-1778-42a6-a1ac-b44254aa03fe" (UID: "c423805e-1778-42a6-a1ac-b44254aa03fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.574914 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5" (OuterVolumeSpecName: "kube-api-access-ljzc5") pod "c423805e-1778-42a6-a1ac-b44254aa03fe" (UID: "c423805e-1778-42a6-a1ac-b44254aa03fe"). InnerVolumeSpecName "kube-api-access-ljzc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.615792 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hkjvd" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.673960 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljzc5\" (UniqueName: \"kubernetes.io/projected/c423805e-1778-42a6-a1ac-b44254aa03fe-kube-api-access-ljzc5\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.674256 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c423805e-1778-42a6-a1ac-b44254aa03fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.880070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.890472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/797388ae-9d68-43cc-9e1b-063da11e1a5a-etc-swift\") pod \"swift-storage-0\" (UID: \"797388ae-9d68-43cc-9e1b-063da11e1a5a\") " pod="openstack/swift-storage-0" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.917475 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sth98"] Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.932366 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sth98" event={"ID":"946d010a-c057-4057-aa6a-87e4d739df92","Type":"ContainerStarted","Data":"bbc0a03ca82ea58c6a1064bf4982e78ad9b358d1d4e909832fe884776b6b044b"} Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 
17:06:00.933787 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rd64m" event={"ID":"c423805e-1778-42a6-a1ac-b44254aa03fe","Type":"ContainerDied","Data":"ea775aed8aed616820ab4425644dc4276cf1f50985deb45718bfc8fd730bb0db"} Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.933830 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea775aed8aed616820ab4425644dc4276cf1f50985deb45718bfc8fd730bb0db" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.933896 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rd64m" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.939378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c80c62c-85e8-4de7-984b-eac919232564","Type":"ContainerStarted","Data":"9128d1cca76f1e1d1773a5641e7929413e7e699132b318e6bb8f301c79bd9cf9"} Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.939405 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c80c62c-85e8-4de7-984b-eac919232564","Type":"ContainerStarted","Data":"1af8eca205ea99cd2b418d5874a3fe94072cd9bc2d43d4d7408f75c4636395ff"} Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.939761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 19 17:06:00 crc kubenswrapper[4792]: I0319 17:06:00.967173 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.111245752 podStartE2EDuration="6.967153832s" podCreationTimestamp="2026-03-19 17:05:54 +0000 UTC" firstStartedPulling="2026-03-19 17:05:55.278219122 +0000 UTC m=+1518.424276662" lastFinishedPulling="2026-03-19 17:05:59.134127202 +0000 UTC m=+1522.280184742" observedRunningTime="2026-03-19 17:06:00.96674433 +0000 UTC m=+1524.112801880" watchObservedRunningTime="2026-03-19 
17:06:00.967153832 +0000 UTC m=+1524.113211382" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.042401 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.210238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aede-account-create-update-4zdv8"] Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.484993 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.715033 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.737238 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.768984 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a33bb0-077b-4fd6-a000-2bb90dccd2be" path="/var/lib/kubelet/pods/77a33bb0-077b-4fd6-a000-2bb90dccd2be/volumes" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.816506 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.892535 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tk2zq"] Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.910307 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.938978 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tk2zq"] Mar 19 17:06:01 crc kubenswrapper[4792]: I0319 17:06:01.961338 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.009987 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-h8f7f" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.012072 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts\") pod \"e16f1378-b929-44f2-a851-c4de7620ae5b\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.012365 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mwhb\" (UniqueName: \"kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb\") pod \"e16f1378-b929-44f2-a851-c4de7620ae5b\" (UID: \"e16f1378-b929-44f2-a851-c4de7620ae5b\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.015052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h8f7f" event={"ID":"fd49301f-b993-4978-ad6a-393fc9fdcb64","Type":"ContainerDied","Data":"7c8a94784327739d3f4269f3ee823f6620d94b3e56bcca43aec6bf98d638502c"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.015102 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c8a94784327739d3f4269f3ee823f6620d94b3e56bcca43aec6bf98d638502c" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.016322 4792 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e16f1378-b929-44f2-a851-c4de7620ae5b" (UID: "e16f1378-b929-44f2-a851-c4de7620ae5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.024131 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a28a-account-create-update-pxpnr" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.024349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a28a-account-create-update-pxpnr" event={"ID":"e16f1378-b929-44f2-a851-c4de7620ae5b","Type":"ContainerDied","Data":"7be99f9e98a046f9eefc3d0358c026149fd48ecbdbea0d4c8df487c4082de4a3"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.024376 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be99f9e98a046f9eefc3d0358c026149fd48ecbdbea0d4c8df487c4082de4a3" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.038449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" event={"ID":"ce715d39-f1ec-46f6-be8b-de76de850a77","Type":"ContainerDied","Data":"b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.038744 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37a0dbdae6b4d3280979d3eaa02bc899d7eb6652c3a214ecdf0d1069a5b54a0" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.038884 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-bf32-account-create-update-gwgzp" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.043270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aede-account-create-update-4zdv8" event={"ID":"36cfc4db-45c9-4d06-9244-9fa9dc88b94b","Type":"ContainerStarted","Data":"48cd40e5cca64520aae05d0984acd1be08bb083b1211e8782cd56040bdac4f23"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.043311 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aede-account-create-update-4zdv8" event={"ID":"36cfc4db-45c9-4d06-9244-9fa9dc88b94b","Type":"ContainerStarted","Data":"568d3397db5d8cf365ccc1ab425f98e78634357616a7993f8d1ac94e3480b5aa"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.044343 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16f1378-b929-44f2-a851-c4de7620ae5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.047204 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sth98" event={"ID":"946d010a-c057-4057-aa6a-87e4d739df92","Type":"ContainerStarted","Data":"364d68280989e43a835519963ba37ac7a205fef71c3d808b9ad18eaddbaf008f"} Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.049443 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="dnsmasq-dns" containerID="cri-o://30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0" gracePeriod=10 Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.074064 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb" (OuterVolumeSpecName: "kube-api-access-9mwhb") pod "e16f1378-b929-44f2-a851-c4de7620ae5b" 
(UID: "e16f1378-b929-44f2-a851-c4de7620ae5b"). InnerVolumeSpecName "kube-api-access-9mwhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.076258 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:06:02 crc kubenswrapper[4792]: E0319 17:06:02.078634 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423805e-1778-42a6-a1ac-b44254aa03fe" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.078663 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423805e-1778-42a6-a1ac-b44254aa03fe" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: E0319 17:06:02.078685 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16f1378-b929-44f2-a851-c4de7620ae5b" containerName="mariadb-account-create-update" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.078694 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16f1378-b929-44f2-a851-c4de7620ae5b" containerName="mariadb-account-create-update" Mar 19 17:06:02 crc kubenswrapper[4792]: E0319 17:06:02.078730 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd49301f-b993-4978-ad6a-393fc9fdcb64" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.078736 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd49301f-b993-4978-ad6a-393fc9fdcb64" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: E0319 17:06:02.078779 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce715d39-f1ec-46f6-be8b-de76de850a77" containerName="mariadb-account-create-update" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.078788 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce715d39-f1ec-46f6-be8b-de76de850a77" containerName="mariadb-account-create-update" Mar 19 
17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.079667 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce715d39-f1ec-46f6-be8b-de76de850a77" containerName="mariadb-account-create-update" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.079854 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd49301f-b993-4978-ad6a-393fc9fdcb64" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.080117 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c423805e-1778-42a6-a1ac-b44254aa03fe" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.080205 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16f1378-b929-44f2-a851-c4de7620ae5b" containerName="mariadb-account-create-update" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.080417 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:06:02 crc kubenswrapper[4792]: E0319 17:06:02.081018 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34502e69-2af6-4cbd-8854-753cd654fc49" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.081093 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="34502e69-2af6-4cbd-8854-753cd654fc49" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.081539 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="34502e69-2af6-4cbd-8854-753cd654fc49" containerName="mariadb-database-create" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.085489 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.150033 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.150446 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts\") pod \"ce715d39-f1ec-46f6-be8b-de76de850a77\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.150880 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce715d39-f1ec-46f6-be8b-de76de850a77" (UID: "ce715d39-f1ec-46f6-be8b-de76de850a77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.150895 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtfx4\" (UniqueName: \"kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4\") pod \"ce715d39-f1ec-46f6-be8b-de76de850a77\" (UID: \"ce715d39-f1ec-46f6-be8b-de76de850a77\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.150995 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcqq\" (UniqueName: \"kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq\") pod \"fd49301f-b993-4978-ad6a-393fc9fdcb64\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.151046 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts\") pod \"fd49301f-b993-4978-ad6a-393fc9fdcb64\" (UID: \"fd49301f-b993-4978-ad6a-393fc9fdcb64\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.151937 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce715d39-f1ec-46f6-be8b-de76de850a77-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.151955 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mwhb\" (UniqueName: \"kubernetes.io/projected/e16f1378-b929-44f2-a851-c4de7620ae5b-kube-api-access-9mwhb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.154245 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd49301f-b993-4978-ad6a-393fc9fdcb64" (UID: "fd49301f-b993-4978-ad6a-393fc9fdcb64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.167751 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq" (OuterVolumeSpecName: "kube-api-access-rwcqq") pod "fd49301f-b993-4978-ad6a-393fc9fdcb64" (UID: "fd49301f-b993-4978-ad6a-393fc9fdcb64"). InnerVolumeSpecName "kube-api-access-rwcqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.168572 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4" (OuterVolumeSpecName: "kube-api-access-dtfx4") pod "ce715d39-f1ec-46f6-be8b-de76de850a77" (UID: "ce715d39-f1ec-46f6-be8b-de76de850a77"). InnerVolumeSpecName "kube-api-access-dtfx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.220155 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9q76v"] Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.221611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.228428 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.252915 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts\") pod \"34502e69-2af6-4cbd-8854-753cd654fc49\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253004 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkpg\" (UniqueName: \"kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg\") pod \"34502e69-2af6-4cbd-8854-753cd654fc49\" (UID: \"34502e69-2af6-4cbd-8854-753cd654fc49\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253692 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vw6s\" (UniqueName: \"kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253785 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtfx4\" (UniqueName: \"kubernetes.io/projected/ce715d39-f1ec-46f6-be8b-de76de850a77-kube-api-access-dtfx4\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253796 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcqq\" (UniqueName: \"kubernetes.io/projected/fd49301f-b993-4978-ad6a-393fc9fdcb64-kube-api-access-rwcqq\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.253807 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd49301f-b993-4978-ad6a-393fc9fdcb64-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.254189 4792 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34502e69-2af6-4cbd-8854-753cd654fc49" (UID: "34502e69-2af6-4cbd-8854-753cd654fc49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.254196 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.267174 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg" (OuterVolumeSpecName: "kube-api-access-pbkpg") pod "34502e69-2af6-4cbd-8854-753cd654fc49" (UID: "34502e69-2af6-4cbd-8854-753cd654fc49"). InnerVolumeSpecName "kube-api-access-pbkpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.278185 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9q76v"] Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.290634 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-aede-account-create-update-4zdv8" podStartSLOduration=2.290617473 podStartE2EDuration="2.290617473s" podCreationTimestamp="2026-03-19 17:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:02.100310729 +0000 UTC m=+1525.246368259" watchObservedRunningTime="2026-03-19 17:06:02.290617473 +0000 UTC m=+1525.436675013" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355437 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts\") pod \"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 
19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vw6s\" (UniqueName: \"kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnm7\" (UniqueName: \"kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7\") pod \"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355876 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34502e69-2af6-4cbd-8854-753cd654fc49-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.355893 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkpg\" (UniqueName: \"kubernetes.io/projected/34502e69-2af6-4cbd-8854-753cd654fc49-kube-api-access-pbkpg\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.356032 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565666-mmx49"] Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.356347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.356488 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.381439 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.388732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vw6s\" (UniqueName: \"kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s\") pod \"redhat-operators-jg2vl\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.458914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnm7\" (UniqueName: \"kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7\") pod \"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.459030 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts\") pod \"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.460051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts\") pod 
\"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.482376 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.494693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnm7\" (UniqueName: \"kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7\") pod \"root-account-create-update-9q76v\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.553805 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.560888 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkp6w\" (UniqueName: \"kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w\") pod \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.561268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts\") pod \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\" (UID: \"66ccf938-23a0-4851-8cc4-a30bc91fdf3a\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.562490 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66ccf938-23a0-4851-8cc4-a30bc91fdf3a" (UID: "66ccf938-23a0-4851-8cc4-a30bc91fdf3a"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.596099 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w" (OuterVolumeSpecName: "kube-api-access-bkp6w") pod "66ccf938-23a0-4851-8cc4-a30bc91fdf3a" (UID: "66ccf938-23a0-4851-8cc4-a30bc91fdf3a"). InnerVolumeSpecName "kube-api-access-bkp6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.664984 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkp6w\" (UniqueName: \"kubernetes.io/projected/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-kube-api-access-bkp6w\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.665015 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ccf938-23a0-4851-8cc4-a30bc91fdf3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.772130 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.868652 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz8v\" (UniqueName: \"kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v\") pod \"9f62387d-4adf-4685-a9ce-dbc93745f149\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.868905 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") pod \"9f62387d-4adf-4685-a9ce-dbc93745f149\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.869177 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb\") pod \"9f62387d-4adf-4685-a9ce-dbc93745f149\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.869272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config\") pod \"9f62387d-4adf-4685-a9ce-dbc93745f149\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.878381 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v" (OuterVolumeSpecName: "kube-api-access-8vz8v") pod "9f62387d-4adf-4685-a9ce-dbc93745f149" (UID: "9f62387d-4adf-4685-a9ce-dbc93745f149"). InnerVolumeSpecName "kube-api-access-8vz8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.970753 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f62387d-4adf-4685-a9ce-dbc93745f149" (UID: "9f62387d-4adf-4685-a9ce-dbc93745f149"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.973068 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") pod \"9f62387d-4adf-4685-a9ce-dbc93745f149\" (UID: \"9f62387d-4adf-4685-a9ce-dbc93745f149\") " Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.980218 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 17:06:02 crc kubenswrapper[4792]: W0319 17:06:02.981136 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9f62387d-4adf-4685-a9ce-dbc93745f149/volumes/kubernetes.io~configmap/dns-svc Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.981159 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f62387d-4adf-4685-a9ce-dbc93745f149" (UID: "9f62387d-4adf-4685-a9ce-dbc93745f149"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.982956 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz8v\" (UniqueName: \"kubernetes.io/projected/9f62387d-4adf-4685-a9ce-dbc93745f149-kube-api-access-8vz8v\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.983066 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.995461 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config" (OuterVolumeSpecName: "config") pod "9f62387d-4adf-4685-a9ce-dbc93745f149" (UID: "9f62387d-4adf-4685-a9ce-dbc93745f149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:02 crc kubenswrapper[4792]: I0319 17:06:02.998727 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f62387d-4adf-4685-a9ce-dbc93745f149" (UID: "9f62387d-4adf-4685-a9ce-dbc93745f149"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.065437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" event={"ID":"34502e69-2af6-4cbd-8854-753cd654fc49","Type":"ContainerDied","Data":"3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.065482 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edf67fc5655eede3c51d9401d4f33deb3b505708a46ce998f9de62dda2add2b" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.065553 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cbpz5" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.075547 4792 generic.go:334] "Generic (PLEG): container finished" podID="36cfc4db-45c9-4d06-9244-9fa9dc88b94b" containerID="48cd40e5cca64520aae05d0984acd1be08bb083b1211e8782cd56040bdac4f23" exitCode=0 Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.075771 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aede-account-create-update-4zdv8" event={"ID":"36cfc4db-45c9-4d06-9244-9fa9dc88b94b","Type":"ContainerDied","Data":"48cd40e5cca64520aae05d0984acd1be08bb083b1211e8782cd56040bdac4f23"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.081962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eeeb-account-create-update-jr92l" event={"ID":"66ccf938-23a0-4851-8cc4-a30bc91fdf3a","Type":"ContainerDied","Data":"fa19cd2430aac9fc22533e5d7811fc13ef2520ced13be2303f6e87c7101015a5"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.082005 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa19cd2430aac9fc22533e5d7811fc13ef2520ced13be2303f6e87c7101015a5" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.082084 4792 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eeeb-account-create-update-jr92l" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.085035 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.085174 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f62387d-4adf-4685-a9ce-dbc93745f149-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.088217 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerID="30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0" exitCode=0 Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.088393 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" event={"ID":"9f62387d-4adf-4685-a9ce-dbc93745f149","Type":"ContainerDied","Data":"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.088623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" event={"ID":"9f62387d-4adf-4685-a9ce-dbc93745f149","Type":"ContainerDied","Data":"275c20b9e7dc57badcc3bb301d7369dcc6a9dcc5742cc9b541d9834ff7560a2d"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.088695 4792 scope.go:117] "RemoveContainer" containerID="30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.088557 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5ps76" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.115592 4792 generic.go:334] "Generic (PLEG): container finished" podID="946d010a-c057-4057-aa6a-87e4d739df92" containerID="364d68280989e43a835519963ba37ac7a205fef71c3d808b9ad18eaddbaf008f" exitCode=0 Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.115689 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sth98" event={"ID":"946d010a-c057-4057-aa6a-87e4d739df92","Type":"ContainerDied","Data":"364d68280989e43a835519963ba37ac7a205fef71c3d808b9ad18eaddbaf008f"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.145125 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565666-mmx49" event={"ID":"175b0c5f-9753-4154-9086-e39e498077e5","Type":"ContainerStarted","Data":"d1a1db8e6826e59d82ef0cc77e8f469540c24b3f41dd208662798e720a074bae"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.146986 4792 scope.go:117] "RemoveContainer" containerID="93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.152640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"5092b3896b32f98751422d9370d067ca952c1d57c10ce85fee81624d08d1b526"} Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.153011 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-h8f7f" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.205333 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.217827 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5ps76"] Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.246066 4792 scope.go:117] "RemoveContainer" containerID="30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0" Mar 19 17:06:03 crc kubenswrapper[4792]: E0319 17:06:03.257007 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0\": container with ID starting with 30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0 not found: ID does not exist" containerID="30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.257253 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0"} err="failed to get container status \"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0\": rpc error: code = NotFound desc = could not find container \"30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0\": container with ID starting with 30dc9fcde5c709edd85312467ada8224705b7fc51c50a1078d283299f8dfc5f0 not found: ID does not exist" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.257366 4792 scope.go:117] "RemoveContainer" containerID="93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392" Mar 19 17:06:03 crc kubenswrapper[4792]: E0319 17:06:03.261022 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392\": container with ID starting with 93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392 not found: ID does not exist" containerID="93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.261307 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392"} err="failed to get container status \"93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392\": rpc error: code = NotFound desc = could not find container \"93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392\": container with ID starting with 93a5b421bc779a8fb167364835d7f09b8f8bd99e4834363bbbcb079d3e30f392 not found: ID does not exist" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.314936 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9q76v"] Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.336824 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:06:03 crc kubenswrapper[4792]: W0319 17:06:03.343585 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d5868a6_fe98_44f9_908f_a5c9335098b1.slice/crio-a2d176b24544bcddeb112ad0d855aa19ebbe662b5e453f4f3e0c8a95d464e784 WatchSource:0}: Error finding container a2d176b24544bcddeb112ad0d855aa19ebbe662b5e453f4f3e0c8a95d464e784: Status 404 returned error can't find the container with id a2d176b24544bcddeb112ad0d855aa19ebbe662b5e453f4f3e0c8a95d464e784 Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.763362 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879f2f03-2c21-4cd7-9a25-f5e13cb028e6" path="/var/lib/kubelet/pods/879f2f03-2c21-4cd7-9a25-f5e13cb028e6/volumes" Mar 
19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.764367 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" path="/var/lib/kubelet/pods/9f62387d-4adf-4685-a9ce-dbc93745f149/volumes" Mar 19 17:06:03 crc kubenswrapper[4792]: I0319 17:06:03.900914 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sth98" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.026290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfjg\" (UniqueName: \"kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg\") pod \"946d010a-c057-4057-aa6a-87e4d739df92\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.026658 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts\") pod \"946d010a-c057-4057-aa6a-87e4d739df92\" (UID: \"946d010a-c057-4057-aa6a-87e4d739df92\") " Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.027778 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "946d010a-c057-4057-aa6a-87e4d739df92" (UID: "946d010a-c057-4057-aa6a-87e4d739df92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.038227 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg" (OuterVolumeSpecName: "kube-api-access-8hfjg") pod "946d010a-c057-4057-aa6a-87e4d739df92" (UID: "946d010a-c057-4057-aa6a-87e4d739df92"). 
InnerVolumeSpecName "kube-api-access-8hfjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.130401 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hfjg\" (UniqueName: \"kubernetes.io/projected/946d010a-c057-4057-aa6a-87e4d739df92-kube-api-access-8hfjg\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.130433 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/946d010a-c057-4057-aa6a-87e4d739df92-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.175603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sth98" event={"ID":"946d010a-c057-4057-aa6a-87e4d739df92","Type":"ContainerDied","Data":"bbc0a03ca82ea58c6a1064bf4982e78ad9b358d1d4e909832fe884776b6b044b"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.175649 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc0a03ca82ea58c6a1064bf4982e78ad9b358d1d4e909832fe884776b6b044b" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.175716 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sth98" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.184116 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565666-mmx49" event={"ID":"175b0c5f-9753-4154-9086-e39e498077e5","Type":"ContainerStarted","Data":"7456c8a80c0df7e4e6a7dddbd799e9dec092585ecf3b85abc501acdfcec02938"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.195524 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerStarted","Data":"f3e965d403b1e60d7ff7f9cb0bae9dc1614602883d7752eabddea845d36e5baa"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.197808 4792 generic.go:334] "Generic (PLEG): container finished" podID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerID="20d17599e7baf08aa80dd68cd5f4e581b1c695b476e66383f43e5b9e2e64547b" exitCode=0 Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.198104 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerDied","Data":"20d17599e7baf08aa80dd68cd5f4e581b1c695b476e66383f43e5b9e2e64547b"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.198151 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerStarted","Data":"a2d176b24544bcddeb112ad0d855aa19ebbe662b5e453f4f3e0c8a95d464e784"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.204824 4792 generic.go:334] "Generic (PLEG): container finished" podID="b8a1f309-b03a-483d-8df7-7539bba505a1" containerID="6a45785ead2d5da13929da215c94e3ad1d6e7f2d014ad90b6df5e5b47c9adf29" exitCode=0 Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.204937 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-9q76v" event={"ID":"b8a1f309-b03a-483d-8df7-7539bba505a1","Type":"ContainerDied","Data":"6a45785ead2d5da13929da215c94e3ad1d6e7f2d014ad90b6df5e5b47c9adf29"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.205314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9q76v" event={"ID":"b8a1f309-b03a-483d-8df7-7539bba505a1","Type":"ContainerStarted","Data":"6b837e2b3e8199919da187d42259ae5136b92fbe4b009972341dc8fb0f820a1b"} Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.212245 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565666-mmx49" podStartSLOduration=2.90766304 podStartE2EDuration="4.212222973s" podCreationTimestamp="2026-03-19 17:06:00 +0000 UTC" firstStartedPulling="2026-03-19 17:06:02.379475472 +0000 UTC m=+1525.525533012" lastFinishedPulling="2026-03-19 17:06:03.684035405 +0000 UTC m=+1526.830092945" observedRunningTime="2026-03-19 17:06:04.203228567 +0000 UTC m=+1527.349286117" watchObservedRunningTime="2026-03-19 17:06:04.212222973 +0000 UTC m=+1527.358280513" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.292216 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.461441087 podStartE2EDuration="1m17.292191639s" podCreationTimestamp="2026-03-19 17:04:47 +0000 UTC" firstStartedPulling="2026-03-19 17:05:08.950047689 +0000 UTC m=+1472.096105229" lastFinishedPulling="2026-03-19 17:06:03.780798241 +0000 UTC m=+1526.926855781" observedRunningTime="2026-03-19 17:06:04.273365122 +0000 UTC m=+1527.419422672" watchObservedRunningTime="2026-03-19 17:06:04.292191639 +0000 UTC m=+1527.438249179" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.785142 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.852148 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sztj\" (UniqueName: \"kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj\") pod \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.852226 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts\") pod \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\" (UID: \"36cfc4db-45c9-4d06-9244-9fa9dc88b94b\") " Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.853889 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36cfc4db-45c9-4d06-9244-9fa9dc88b94b" (UID: "36cfc4db-45c9-4d06-9244-9fa9dc88b94b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.875723 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj" (OuterVolumeSpecName: "kube-api-access-4sztj") pod "36cfc4db-45c9-4d06-9244-9fa9dc88b94b" (UID: "36cfc4db-45c9-4d06-9244-9fa9dc88b94b"). InnerVolumeSpecName "kube-api-access-4sztj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.955369 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sztj\" (UniqueName: \"kubernetes.io/projected/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-kube-api-access-4sztj\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:04 crc kubenswrapper[4792]: I0319 17:06:04.955401 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36cfc4db-45c9-4d06-9244-9fa9dc88b94b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.226880 4792 generic.go:334] "Generic (PLEG): container finished" podID="175b0c5f-9753-4154-9086-e39e498077e5" containerID="7456c8a80c0df7e4e6a7dddbd799e9dec092585ecf3b85abc501acdfcec02938" exitCode=0 Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.226971 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565666-mmx49" event={"ID":"175b0c5f-9753-4154-9086-e39e498077e5","Type":"ContainerDied","Data":"7456c8a80c0df7e4e6a7dddbd799e9dec092585ecf3b85abc501acdfcec02938"} Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.229047 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aede-account-create-update-4zdv8" event={"ID":"36cfc4db-45c9-4d06-9244-9fa9dc88b94b","Type":"ContainerDied","Data":"568d3397db5d8cf365ccc1ab425f98e78634357616a7993f8d1ac94e3480b5aa"} Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.229088 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aede-account-create-update-4zdv8" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.229095 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="568d3397db5d8cf365ccc1ab425f98e78634357616a7993f8d1ac94e3480b5aa" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.707407 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.771272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnm7\" (UniqueName: \"kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7\") pod \"b8a1f309-b03a-483d-8df7-7539bba505a1\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.771449 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts\") pod \"b8a1f309-b03a-483d-8df7-7539bba505a1\" (UID: \"b8a1f309-b03a-483d-8df7-7539bba505a1\") " Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.773513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8a1f309-b03a-483d-8df7-7539bba505a1" (UID: "b8a1f309-b03a-483d-8df7-7539bba505a1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.777497 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7" (OuterVolumeSpecName: "kube-api-access-zjnm7") pod "b8a1f309-b03a-483d-8df7-7539bba505a1" (UID: "b8a1f309-b03a-483d-8df7-7539bba505a1"). InnerVolumeSpecName "kube-api-access-zjnm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.874595 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjnm7\" (UniqueName: \"kubernetes.io/projected/b8a1f309-b03a-483d-8df7-7539bba505a1-kube-api-access-zjnm7\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:05 crc kubenswrapper[4792]: I0319 17:06:05.874633 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a1f309-b03a-483d-8df7-7539bba505a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.239882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"3de0b1d145dcb81f24d3c6b2d6b5d0a80ebc320acc943d9e3dffbe719463ab8f"} Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.239930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"b45adfd6cb82714f3cd634ce7a5dd9811dd0167060d28b4e2a127a9600fb48d9"} Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.239941 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"96fa8529eb578a7db348614bd22933ec8d91c9f1fea09e008610ad018010c635"} Mar 19 17:06:06 crc 
kubenswrapper[4792]: I0319 17:06:06.241555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerStarted","Data":"33bcfe0dab5aa8f138f612eec29c8813dc0213fa5daf141b367f29abd2af524c"} Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.243181 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9q76v" Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.243177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9q76v" event={"ID":"b8a1f309-b03a-483d-8df7-7539bba505a1","Type":"ContainerDied","Data":"6b837e2b3e8199919da187d42259ae5136b92fbe4b009972341dc8fb0f820a1b"} Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.243230 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b837e2b3e8199919da187d42259ae5136b92fbe4b009972341dc8fb0f820a1b" Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.765052 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.894290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlljw\" (UniqueName: \"kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw\") pod \"175b0c5f-9753-4154-9086-e39e498077e5\" (UID: \"175b0c5f-9753-4154-9086-e39e498077e5\") " Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.901469 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw" (OuterVolumeSpecName: "kube-api-access-jlljw") pod "175b0c5f-9753-4154-9086-e39e498077e5" (UID: "175b0c5f-9753-4154-9086-e39e498077e5"). 
InnerVolumeSpecName "kube-api-access-jlljw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:06 crc kubenswrapper[4792]: I0319 17:06:06.997950 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlljw\" (UniqueName: \"kubernetes.io/projected/175b0c5f-9753-4154-9086-e39e498077e5-kube-api-access-jlljw\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.256602 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565666-mmx49" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.256605 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565666-mmx49" event={"ID":"175b0c5f-9753-4154-9086-e39e498077e5","Type":"ContainerDied","Data":"d1a1db8e6826e59d82ef0cc77e8f469540c24b3f41dd208662798e720a074bae"} Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.256743 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a1db8e6826e59d82ef0cc77e8f469540c24b3f41dd208662798e720a074bae" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.273814 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"19c98141fa6c4b5207f89a894c390c3824f2fe76aab88d3ab6a9f8e656cb91d3"} Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.297496 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565660-q64d4"] Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.312421 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565660-q64d4"] Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.564537 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b"] Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 
17:06:07.565018 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cfc4db-45c9-4d06-9244-9fa9dc88b94b" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565034 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cfc4db-45c9-4d06-9244-9fa9dc88b94b" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 17:06:07.565059 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a1f309-b03a-483d-8df7-7539bba505a1" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565066 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a1f309-b03a-483d-8df7-7539bba505a1" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 17:06:07.565076 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="dnsmasq-dns" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565082 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="dnsmasq-dns" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 17:06:07.565093 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="init" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565098 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="init" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 17:06:07.565113 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175b0c5f-9753-4154-9086-e39e498077e5" containerName="oc" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565118 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="175b0c5f-9753-4154-9086-e39e498077e5" containerName="oc" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 
17:06:07.565140 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ccf938-23a0-4851-8cc4-a30bc91fdf3a" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565153 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ccf938-23a0-4851-8cc4-a30bc91fdf3a" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: E0319 17:06:07.565164 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d010a-c057-4057-aa6a-87e4d739df92" containerName="mariadb-database-create" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565169 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d010a-c057-4057-aa6a-87e4d739df92" containerName="mariadb-database-create" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565375 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ccf938-23a0-4851-8cc4-a30bc91fdf3a" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565390 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cfc4db-45c9-4d06-9244-9fa9dc88b94b" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565400 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f62387d-4adf-4685-a9ce-dbc93745f149" containerName="dnsmasq-dns" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565410 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a1f309-b03a-483d-8df7-7539bba505a1" containerName="mariadb-account-create-update" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565426 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="175b0c5f-9753-4154-9086-e39e498077e5" containerName="oc" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.565436 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="946d010a-c057-4057-aa6a-87e4d739df92" containerName="mariadb-database-create" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.566193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.580402 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b"] Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.616520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbsk\" (UniqueName: \"kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.616678 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.725239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.725655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbsk\" (UniqueName: 
\"kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.726342 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.755017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbsk\" (UniqueName: \"kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk\") pod \"mysqld-exporter-openstack-cell1-db-create-dmv9b\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.786924 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f927d1-51a8-41d6-a503-1967b4fd9561" path="/var/lib/kubelet/pods/86f927d1-51a8-41d6-a503-1967b4fd9561/volumes" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.821908 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-8ac8-account-create-update-l6zkp"] Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.823306 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.827118 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.844922 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8ac8-account-create-update-l6zkp"] Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.885926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.928549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:07 crc kubenswrapper[4792]: I0319 17:06:07.928750 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wnr\" (UniqueName: \"kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.030952 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wnr\" (UniqueName: \"kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " 
pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.031215 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.031993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.050129 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wnr\" (UniqueName: \"kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr\") pod \"mysqld-exporter-8ac8-account-create-update-l6zkp\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.196783 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.291940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"38ab19a05360f27ab0eb021db5554a8ed22e68f976ae3c244575b84ae2aff5e6"} Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.291989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"0b4defc6e63d1cf163f7fb373775009390947da685982e4b4aaa76ebf47c3fb1"} Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.392982 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b"] Mar 19 17:06:08 crc kubenswrapper[4792]: W0319 17:06:08.394208 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62179f52_7a2c_4ca8_91e3_9fd241d9b1e6.slice/crio-804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9 WatchSource:0}: Error finding container 804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9: Status 404 returned error can't find the container with id 804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9 Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.719642 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8ac8-account-create-update-l6zkp"] Mar 19 17:06:08 crc kubenswrapper[4792]: I0319 17:06:08.855401 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.303503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" 
event={"ID":"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6","Type":"ContainerStarted","Data":"93a0547664e0a71ac8735e5c9a5c44d897bb4f5e04c7fa9ef6cc50f0e2d57f22"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.303579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" event={"ID":"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6","Type":"ContainerStarted","Data":"804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.305299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" event={"ID":"76877385-6964-4f62-a8e7-9d73a772c630","Type":"ContainerStarted","Data":"6ded6d9f98da1353d7adfa80be6fd2d1df2064d41ccc90542bb500a56c2b4d0f"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.305340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" event={"ID":"76877385-6964-4f62-a8e7-9d73a772c630","Type":"ContainerStarted","Data":"366829a02ef3b945ceb8ea93fd415fb61b7f5767cdf20625ca1ab16bb7ed4fc6"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.310914 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"e641b6a8e33ca46ba5287ff4e34e55b2549288084bc4fe0d9a12673798aef099"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.310965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"87226a5a8801716fa2435e2299e593d4b902fd46ec3e8f97bd933f19119d991a"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.321708 4792 generic.go:334] "Generic (PLEG): container finished" podID="5d5868a6-fe98-44f9-908f-a5c9335098b1" 
containerID="33bcfe0dab5aa8f138f612eec29c8813dc0213fa5daf141b367f29abd2af524c" exitCode=0 Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.321797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerDied","Data":"33bcfe0dab5aa8f138f612eec29c8813dc0213fa5daf141b367f29abd2af524c"} Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.331283 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" podStartSLOduration=2.331265621 podStartE2EDuration="2.331265621s" podCreationTimestamp="2026-03-19 17:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:09.328172217 +0000 UTC m=+1532.474229747" watchObservedRunningTime="2026-03-19 17:06:09.331265621 +0000 UTC m=+1532.477323161" Mar 19 17:06:09 crc kubenswrapper[4792]: I0319 17:06:09.369567 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" podStartSLOduration=2.369549353 podStartE2EDuration="2.369549353s" podCreationTimestamp="2026-03-19 17:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:09.366790877 +0000 UTC m=+1532.512848417" watchObservedRunningTime="2026-03-19 17:06:09.369549353 +0000 UTC m=+1532.515606903" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.269782 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xbdtj"] Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.271812 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.274024 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bcbrs" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.276679 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.283082 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xbdtj"] Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.285041 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.285107 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6v8\" (UniqueName: \"kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.285151 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.285220 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.334527 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerStarted","Data":"f6689d53d79b58bd5d06a5116d45e9cb0e04b0b879c5ba5af5408de1802ca920"} Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.336714 4792 generic.go:334] "Generic (PLEG): container finished" podID="62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" containerID="93a0547664e0a71ac8735e5c9a5c44d897bb4f5e04c7fa9ef6cc50f0e2d57f22" exitCode=0 Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.336827 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" event={"ID":"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6","Type":"ContainerDied","Data":"93a0547664e0a71ac8735e5c9a5c44d897bb4f5e04c7fa9ef6cc50f0e2d57f22"} Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.340151 4792 generic.go:334] "Generic (PLEG): container finished" podID="76877385-6964-4f62-a8e7-9d73a772c630" containerID="6ded6d9f98da1353d7adfa80be6fd2d1df2064d41ccc90542bb500a56c2b4d0f" exitCode=0 Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.340203 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" event={"ID":"76877385-6964-4f62-a8e7-9d73a772c630","Type":"ContainerDied","Data":"6ded6d9f98da1353d7adfa80be6fd2d1df2064d41ccc90542bb500a56c2b4d0f"} Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.387084 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle\") pod \"glance-db-sync-xbdtj\" (UID: 
\"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.387204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.387255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6v8\" (UniqueName: \"kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.387295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.396877 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jg2vl" podStartSLOduration=3.772681209 podStartE2EDuration="9.396858133s" podCreationTimestamp="2026-03-19 17:06:01 +0000 UTC" firstStartedPulling="2026-03-19 17:06:04.199976098 +0000 UTC m=+1527.346033628" lastFinishedPulling="2026-03-19 17:06:09.824153012 +0000 UTC m=+1532.970210552" observedRunningTime="2026-03-19 17:06:10.376906626 +0000 UTC m=+1533.522964166" watchObservedRunningTime="2026-03-19 17:06:10.396858133 +0000 UTC m=+1533.542915663" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.399079 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.401980 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.404450 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.404930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6v8\" (UniqueName: \"kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8\") pod \"glance-db-sync-xbdtj\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:10 crc kubenswrapper[4792]: I0319 17:06:10.649243 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:11 crc kubenswrapper[4792]: I0319 17:06:11.395129 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xbdtj"] Mar 19 17:06:11 crc kubenswrapper[4792]: I0319 17:06:11.672673 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 19 17:06:11 crc kubenswrapper[4792]: I0319 17:06:11.736243 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 19 17:06:11 crc kubenswrapper[4792]: I0319 17:06:11.774816 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.250076 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.373338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" event={"ID":"76877385-6964-4f62-a8e7-9d73a772c630","Type":"ContainerDied","Data":"366829a02ef3b945ceb8ea93fd415fb61b7f5767cdf20625ca1ab16bb7ed4fc6"} Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.373692 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366829a02ef3b945ceb8ea93fd415fb61b7f5767cdf20625ca1ab16bb7ed4fc6" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.376360 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-xbdtj" event={"ID":"193d3d1f-e773-4b86-a176-ddb5c7727e39","Type":"ContainerStarted","Data":"dc71cf71a1fde23c8d04d544ef23b3ba51789f9643b8142128aea99aded05ead"} Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.377561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" event={"ID":"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6","Type":"ContainerDied","Data":"804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9"} Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.377582 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804beff67ee55a4dda472e66d18106f35ba43e4e1f3fb9234261df8a7115f1c9" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.424059 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.429966 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.484324 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.484364 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.545940 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts\") pod \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.545995 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts\") pod \"76877385-6964-4f62-a8e7-9d73a772c630\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.546058 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8wnr\" (UniqueName: \"kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr\") pod \"76877385-6964-4f62-a8e7-9d73a772c630\" (UID: \"76877385-6964-4f62-a8e7-9d73a772c630\") " Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.546112 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbsk\" (UniqueName: \"kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk\") pod \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\" (UID: \"62179f52-7a2c-4ca8-91e3-9fd241d9b1e6\") " Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.547384 4792 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76877385-6964-4f62-a8e7-9d73a772c630" (UID: "76877385-6964-4f62-a8e7-9d73a772c630"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.547810 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" (UID: "62179f52-7a2c-4ca8-91e3-9fd241d9b1e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.551073 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk" (OuterVolumeSpecName: "kube-api-access-sbbsk") pod "62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" (UID: "62179f52-7a2c-4ca8-91e3-9fd241d9b1e6"). InnerVolumeSpecName "kube-api-access-sbbsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.555006 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr" (OuterVolumeSpecName: "kube-api-access-l8wnr") pod "76877385-6964-4f62-a8e7-9d73a772c630" (UID: "76877385-6964-4f62-a8e7-9d73a772c630"). InnerVolumeSpecName "kube-api-access-l8wnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.649220 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.649558 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76877385-6964-4f62-a8e7-9d73a772c630-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.649572 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8wnr\" (UniqueName: \"kubernetes.io/projected/76877385-6964-4f62-a8e7-9d73a772c630-kube-api-access-l8wnr\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:12 crc kubenswrapper[4792]: I0319 17:06:12.649586 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbsk\" (UniqueName: \"kubernetes.io/projected/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6-kube-api-access-sbbsk\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.395661 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8ac8-account-create-update-l6zkp" Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.397882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"a89ce23f0f4bef1dc20d23bd7ecad2407278981f8ee282fac16482a424903018"} Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.397939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"d6dc66ce3a40900a9fc82dab3e7d253fddb05ab7e358aa1020493ef0d84a9d5a"} Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.397948 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"052d4f61734111a8e36f2d9fb6ee6ab6fe7580bbc40243fb424fde52f56e836d"} Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.397959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"7789172ebac285c8ef5242b1516088fd2d927d1fe8367241c53c3402e6d9abc7"} Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.398156 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b" Mar 19 17:06:13 crc kubenswrapper[4792]: I0319 17:06:13.557060 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=< Mar 19 17:06:13 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:06:13 crc kubenswrapper[4792]: > Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.420476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"51fc98c09572b695baf7826b5aa229c45efc0daf0ac9a67a5546d15524e0ef44"} Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.420760 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"4ff932cb90e1f558fb4ac206272e948fab93949b05155f1cb358ca727f5f13d4"} Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.420770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"797388ae-9d68-43cc-9e1b-063da11e1a5a","Type":"ContainerStarted","Data":"754275abf872a3f7217c2ae2b9126d082fd170e82f2359aa0626f2d740ba9c53"} Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.474527 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.198963423 podStartE2EDuration="47.474503402s" podCreationTimestamp="2026-03-19 17:05:27 +0000 UTC" firstStartedPulling="2026-03-19 17:06:02.99025628 +0000 UTC m=+1526.136313820" lastFinishedPulling="2026-03-19 17:06:12.265796259 +0000 UTC m=+1535.411853799" observedRunningTime="2026-03-19 17:06:14.461489294 +0000 UTC m=+1537.607546834" watchObservedRunningTime="2026-03-19 17:06:14.474503402 
+0000 UTC m=+1537.620560942" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.748374 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.753134 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"] Mar 19 17:06:14 crc kubenswrapper[4792]: E0319 17:06:14.753685 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76877385-6964-4f62-a8e7-9d73a772c630" containerName="mariadb-account-create-update" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.753706 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="76877385-6964-4f62-a8e7-9d73a772c630" containerName="mariadb-account-create-update" Mar 19 17:06:14 crc kubenswrapper[4792]: E0319 17:06:14.753731 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" containerName="mariadb-database-create" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.753741 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" containerName="mariadb-database-create" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.754025 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" containerName="mariadb-database-create" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.754069 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="76877385-6964-4f62-a8e7-9d73a772c630" containerName="mariadb-account-create-update" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.757012 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.759125 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.767958 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"] Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.903308 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.903821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.903998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7vz\" (UniqueName: \"kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.904237 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: 
\"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.904318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:14 crc kubenswrapper[4792]: I0319 17:06:14.904372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006526 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7vz\" (UniqueName: \"kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006578 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: 
\"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006602 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.006687 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.007520 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.007555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.007669 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.008459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.013596 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.037604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7vz\" (UniqueName: \"kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz\") pod \"dnsmasq-dns-6d5b6d6b67-9hdvh\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.090520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:15 crc kubenswrapper[4792]: I0319 17:06:15.630366 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"]
Mar 19 17:06:15 crc kubenswrapper[4792]: W0319 17:06:15.636782 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76bc43a0_2615_470c_8719_f2081f6ce044.slice/crio-c8ec5954dcc116d4f81c11fb3540632a7ab04e22f7bfd22ac7931a71ae422212 WatchSource:0}: Error finding container c8ec5954dcc116d4f81c11fb3540632a7ab04e22f7bfd22ac7931a71ae422212: Status 404 returned error can't find the container with id c8ec5954dcc116d4f81c11fb3540632a7ab04e22f7bfd22ac7931a71ae422212
Mar 19 17:06:16 crc kubenswrapper[4792]: I0319 17:06:16.452140 4792 generic.go:334] "Generic (PLEG): container finished" podID="76bc43a0-2615-470c-8719-f2081f6ce044" containerID="9c27ac9f96f277b848d308480571893f717237453ab72aeaf62e31b0f6783d66" exitCode=0
Mar 19 17:06:16 crc kubenswrapper[4792]: I0319 17:06:16.452196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" event={"ID":"76bc43a0-2615-470c-8719-f2081f6ce044","Type":"ContainerDied","Data":"9c27ac9f96f277b848d308480571893f717237453ab72aeaf62e31b0f6783d66"}
Mar 19 17:06:16 crc kubenswrapper[4792]: I0319 17:06:16.452495 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" event={"ID":"76bc43a0-2615-470c-8719-f2081f6ce044","Type":"ContainerStarted","Data":"c8ec5954dcc116d4f81c11fb3540632a7ab04e22f7bfd22ac7931a71ae422212"}
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.462633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" event={"ID":"76bc43a0-2615-470c-8719-f2081f6ce044","Type":"ContainerStarted","Data":"60ddb890d154ff48a174160f5f3b5fcf80cf9064efc8264d8e878ee3eaf94c9f"}
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.464129 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.487067 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podStartSLOduration=3.4870475499999998 podStartE2EDuration="3.48704755s" podCreationTimestamp="2026-03-19 17:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:17.482316971 +0000 UTC m=+1540.628374511" watchObservedRunningTime="2026-03-19 17:06:17.48704755 +0000 UTC m=+1540.633105090"
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.979449 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.993501 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Mar 19 17:06:17 crc kubenswrapper[4792]: I0319 17:06:17.996918 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.004963 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.076964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.077026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rlc\" (UniqueName: \"kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.077303 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.193174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.193719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.193784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rlc\" (UniqueName: \"kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.202143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.209965 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.211136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rlc\" (UniqueName: \"kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc\") pod \"mysqld-exporter-0\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.310715 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.855921 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.858548 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:18 crc kubenswrapper[4792]: I0319 17:06:18.905934 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 19 17:06:18 crc kubenswrapper[4792]: W0319 17:06:18.910899 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77cb387a_c012_4955_a0a9_272badd02d11.slice/crio-0f0d64b7dfad4935d417f50b1e85b3588fcbc084c6c71f4fe286e835e4cf1046 WatchSource:0}: Error finding container 0f0d64b7dfad4935d417f50b1e85b3588fcbc084c6c71f4fe286e835e4cf1046: Status 404 returned error can't find the container with id 0f0d64b7dfad4935d417f50b1e85b3588fcbc084c6c71f4fe286e835e4cf1046
Mar 19 17:06:19 crc kubenswrapper[4792]: I0319 17:06:19.546257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"77cb387a-c012-4955-a0a9-272badd02d11","Type":"ContainerStarted","Data":"0f0d64b7dfad4935d417f50b1e85b3588fcbc084c6c71f4fe286e835e4cf1046"}
Mar 19 17:06:19 crc kubenswrapper[4792]: I0319 17:06:19.548133 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:20 crc kubenswrapper[4792]: I0319 17:06:20.230629 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 17:06:20 crc kubenswrapper[4792]: I0319 17:06:20.230677 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 19 17:06:21 crc kubenswrapper[4792]: I0319 17:06:21.672632 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 19 17:06:21 crc kubenswrapper[4792]: I0319 17:06:21.736995 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Mar 19 17:06:21 crc kubenswrapper[4792]: I0319 17:06:21.769528 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.023670 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.025032 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="config-reloader" containerID="cri-o://cd0abb866fd23b8c463291a13648f842ac5d9e6a62f969963ee38ddf353f3bc1" gracePeriod=600
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.025239 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" containerID="cri-o://f3e965d403b1e60d7ff7f9cb0bae9dc1614602883d7752eabddea845d36e5baa" gracePeriod=600
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.025317 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="thanos-sidecar" containerID="cri-o://776a0bacbac4ae1eac591d9c7210c05c54b0c823ef91c0bdd03109e84e6d2412" gracePeriod=600
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.541315 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=<
Mar 19 17:06:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 17:06:23 crc kubenswrapper[4792]: >
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.596971 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerID="f3e965d403b1e60d7ff7f9cb0bae9dc1614602883d7752eabddea845d36e5baa" exitCode=0
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.597003 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerID="776a0bacbac4ae1eac591d9c7210c05c54b0c823ef91c0bdd03109e84e6d2412" exitCode=0
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.597010 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerID="cd0abb866fd23b8c463291a13648f842ac5d9e6a62f969963ee38ddf353f3bc1" exitCode=0
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.597033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerDied","Data":"f3e965d403b1e60d7ff7f9cb0bae9dc1614602883d7752eabddea845d36e5baa"}
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.597058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerDied","Data":"776a0bacbac4ae1eac591d9c7210c05c54b0c823ef91c0bdd03109e84e6d2412"}
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.597076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerDied","Data":"cd0abb866fd23b8c463291a13648f842ac5d9e6a62f969963ee38ddf353f3bc1"}
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.811587 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-tjs8v"]
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.813039 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.834952 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tjs8v"]
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.856683 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.143:9090/-/ready\": dial tcp 10.217.0.143:9090: connect: connection refused"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.932382 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8475-account-create-update-j72dl"]
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.934422 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.939741 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.948723 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8475-account-create-update-j72dl"]
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.950087 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:23 crc kubenswrapper[4792]: I0319 17:06:23.950197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49bh\" (UniqueName: \"kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.014531 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-68wjz"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.015811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.027917 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-68wjz"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.052132 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.052498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z49bh\" (UniqueName: \"kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.052533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.052626 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbgm\" (UniqueName: \"kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.053349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.087010 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z49bh\" (UniqueName: \"kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh\") pod \"heat-db-create-tjs8v\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") " pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.114702 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0be0-account-create-update-8mkbd"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.116943 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.119455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.127080 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0be0-account-create-update-8mkbd"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.134279 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.155088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbrk\" (UniqueName: \"kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.155197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.155371 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.155447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbgm\" (UniqueName: \"kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.157236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.187510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbgm\" (UniqueName: \"kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm\") pod \"heat-8475-account-create-update-j72dl\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") " pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.205543 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4jvsw"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.206772 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.219113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4jvsw"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.254116 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.259628 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27gvf\" (UniqueName: \"kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.259686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.259969 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.260053 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbrk\" (UniqueName: \"kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.260733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.278969 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbrk\" (UniqueName: \"kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk\") pod \"cinder-db-create-68wjz\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") " pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.326677 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gpcnr"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.328270 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.333083 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.343103 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mmz5r"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.344786 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.347237 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrkzh"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.347401 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.347525 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.347817 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.360232 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-395f-account-create-update-btrf9"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.362008 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.363266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.363677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blfp4\" (UniqueName: \"kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.363776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27gvf\" (UniqueName: \"kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.364092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.365030 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.370062 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.379775 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gpcnr"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.390586 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27gvf\" (UniqueName: \"kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf\") pod \"cinder-0be0-account-create-update-8mkbd\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") " pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.391676 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-395f-account-create-update-btrf9"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.412910 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mmz5r"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.450178 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts\") pod \"barbican-395f-account-create-update-btrf9\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469356 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469859 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhz4\" (UniqueName: \"kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4\") pod \"barbican-395f-account-create-update-btrf9\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.469956 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zz8\" (UniqueName: \"kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.470007 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.470107 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x486\" (UniqueName: \"kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.470154 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfp4\" (UniqueName: \"kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.471164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.492949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfp4\" (UniqueName: \"kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4\") pod \"neutron-db-create-4jvsw\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") " pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.533952 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4eab-account-create-update-b4vhn"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.535563 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4eab-account-create-update-b4vhn"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.538993 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.555772 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4eab-account-create-update-b4vhn"]
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572525 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572541 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhz4\" (UniqueName: \"kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4\") pod \"barbican-395f-account-create-update-btrf9\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572657 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zz8\" (UniqueName: \"kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x486\" (UniqueName: \"kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r"
Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.572891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts\") pod \"barbican-395f-account-create-update-btrf9\" (UID:
\"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.573574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts\") pod \"barbican-395f-account-create-update-btrf9\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.577048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.577523 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.578612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.592698 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x486\" (UniqueName: \"kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486\") pod \"keystone-db-sync-mmz5r\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " 
pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.595475 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zz8\" (UniqueName: \"kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8\") pod \"barbican-db-create-gpcnr\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") " pod="openstack/barbican-db-create-gpcnr" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.596649 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhz4\" (UniqueName: \"kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4\") pod \"barbican-395f-account-create-update-btrf9\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") " pod="openstack/barbican-395f-account-create-update-btrf9" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.631693 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4jvsw" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.646920 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpcnr" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.677454 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.678229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf9p\" (UniqueName: \"kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.678289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.749461 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-395f-account-create-update-btrf9" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.780519 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcf9p\" (UniqueName: \"kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.780591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.781208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.799262 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcf9p\" (UniqueName: \"kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p\") pod \"neutron-4eab-account-create-update-b4vhn\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") " pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:24 crc kubenswrapper[4792]: I0319 17:06:24.865144 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4eab-account-create-update-b4vhn" Mar 19 17:06:25 crc kubenswrapper[4792]: I0319 17:06:25.092061 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:06:25 crc kubenswrapper[4792]: I0319 17:06:25.181267 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:06:25 crc kubenswrapper[4792]: I0319 17:06:25.181510 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="dnsmasq-dns" containerID="cri-o://fc542e4924e7e32506d74040ac72fa74f8f9199dba68f18047a2145a61a7b577" gracePeriod=10 Mar 19 17:06:25 crc kubenswrapper[4792]: I0319 17:06:25.623498 4792 generic.go:334] "Generic (PLEG): container finished" podID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerID="fc542e4924e7e32506d74040ac72fa74f8f9199dba68f18047a2145a61a7b577" exitCode=0 Mar 19 17:06:25 crc kubenswrapper[4792]: I0319 17:06:25.623701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" event={"ID":"eb6e8887-3924-4571-8733-6e3bf3a47454","Type":"ContainerDied","Data":"fc542e4924e7e32506d74040ac72fa74f8f9199dba68f18047a2145a61a7b577"} Mar 19 17:06:26 crc kubenswrapper[4792]: I0319 17:06:26.713646 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Mar 19 17:06:28 crc kubenswrapper[4792]: I0319 17:06:28.856015 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.143:9090/-/ready\": dial tcp 
10.217.0.143:9090: connect: connection refused" Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.724129 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gpcnr"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.757010 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tjs8v"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.767595 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-68wjz"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.781766 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4jvsw"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.795936 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-395f-account-create-update-btrf9"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.814227 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mmz5r"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.827126 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0be0-account-create-update-8mkbd"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.835671 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4eab-account-create-update-b4vhn"] Mar 19 17:06:29 crc kubenswrapper[4792]: I0319 17:06:29.979820 4792 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod77a33bb0-077b-4fd6-a000-2bb90dccd2be"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod77a33bb0-077b-4fd6-a000-2bb90dccd2be] : Timed out while waiting for systemd to remove kubepods-besteffort-pod77a33bb0_077b_4fd6_a000_2bb90dccd2be.slice" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.003027 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8475-account-create-update-j72dl"] Mar 19 17:06:30 crc 
kubenswrapper[4792]: W0319 17:06:30.324245 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2b04989_4417_4b0e_9a41_f4980d079a45.slice/crio-1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5 WatchSource:0}: Error finding container 1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5: Status 404 returned error can't find the container with id 1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.329267 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855fda36_92fa_4c54_8976_43639fa2ee51.slice/crio-6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69 WatchSource:0}: Error finding container 6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69: Status 404 returned error can't find the container with id 6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.332777 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde8eaf3e_d60d_4940_8fed_d307ef4afd12.slice/crio-57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7 WatchSource:0}: Error finding container 57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7: Status 404 returned error can't find the container with id 57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.336831 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50dd4286_d2f2_4c9b_a80d_e4731ddc902b.slice/crio-3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1 WatchSource:0}: Error finding container 
3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1: Status 404 returned error can't find the container with id 3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.340549 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod447122e9_4195_4d8b_992d_dc435c22fa07.slice/crio-4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286 WatchSource:0}: Error finding container 4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286: Status 404 returned error can't find the container with id 4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.353394 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8358a03b_42d8_46b5_ab30_b4ac6486da4f.slice/crio-87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6 WatchSource:0}: Error finding container 87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6: Status 404 returned error can't find the container with id 87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.353796 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606a03e6_0ad3_4369_9921_f68f56b278f4.slice/crio-3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8 WatchSource:0}: Error finding container 3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8: Status 404 returned error can't find the container with id 3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.357661 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e42a2c_5486_4292_8810_da11833a706a.slice/crio-02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48 WatchSource:0}: Error finding container 02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48: Status 404 returned error can't find the container with id 02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48 Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.365074 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefad1545_5a1e_45ab_bf50_952c2c8eeba9.slice/crio-dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a WatchSource:0}: Error finding container dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a: Status 404 returned error can't find the container with id dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.469488 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.475657 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.530589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.530670 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.530689 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.530705 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.530989 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531067 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531127 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531141 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531387 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1\") pod 
\"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531459 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8crvq\" (UniqueName: \"kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531511 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lw97\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531540 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.531564 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets\") pod \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\" (UID: \"9f4ce965-a3ed-4d9f-918f-95ff40840ca5\") " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.536057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.543748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq" (OuterVolumeSpecName: "kube-api-access-8crvq") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "kube-api-access-8crvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.544068 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.546063 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config" (OuterVolumeSpecName: "config") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.546422 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.548496 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.549873 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97" (OuterVolumeSpecName: "kube-api-access-7lw97") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "kube-api-access-7lw97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.581029 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out" (OuterVolumeSpecName: "config-out") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.583993 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.615069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.636748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config" (OuterVolumeSpecName: "web-config") pod "9f4ce965-a3ed-4d9f-918f-95ff40840ca5" (UID: "9f4ce965-a3ed-4d9f-918f-95ff40840ca5"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640766 4792 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640805 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640821 4792 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-web-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640849 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640933 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") on node \"crc\" " Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640956 4792 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-config-out\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640976 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.640995 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8crvq\" (UniqueName: \"kubernetes.io/projected/eb6e8887-3924-4571-8733-6e3bf3a47454-kube-api-access-8crvq\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.641009 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lw97\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-kube-api-access-7lw97\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.641022 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.641034 4792 reconciler_common.go:293] "Volume detached for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f4ce965-a3ed-4d9f-918f-95ff40840ca5-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.696400 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config" (OuterVolumeSpecName: "config") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.714812 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.714900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tjs8v" event={"ID":"de8eaf3e-d60d-4940-8fed-d307ef4afd12","Type":"ContainerStarted","Data":"57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.716596 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.717012 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700") on node "crc" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.718163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-68wjz" event={"ID":"447122e9-4195-4d8b-992d-dc435c22fa07","Type":"ContainerStarted","Data":"4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.727468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpcnr" event={"ID":"54e42a2c-5486-4292-8810-da11833a706a","Type":"ContainerStarted","Data":"02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.731183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mmz5r" event={"ID":"efad1545-5a1e-45ab-bf50-952c2c8eeba9","Type":"ContainerStarted","Data":"dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.733697 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0be0-account-create-update-8mkbd" event={"ID":"606a03e6-0ad3-4369-9921-f68f56b278f4","Type":"ContainerStarted","Data":"3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.736830 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4eab-account-create-update-b4vhn" event={"ID":"e2b04989-4417-4b0e-9a41-f4980d079a45","Type":"ContainerStarted","Data":"1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.740973 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" event={"ID":"eb6e8887-3924-4571-8733-6e3bf3a47454","Type":"ContainerDied","Data":"5908fbd8eef541488d3b830694a6d131c60db55983a69653f6ea3a52f4d84239"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.741097 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-m4ldr" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.741355 4792 scope.go:117] "RemoveContainer" containerID="fc542e4924e7e32506d74040ac72fa74f8f9199dba68f18047a2145a61a7b577" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.742423 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.743054 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") pod \"eb6e8887-3924-4571-8733-6e3bf3a47454\" (UID: \"eb6e8887-3924-4571-8733-6e3bf3a47454\") " Mar 19 17:06:30 crc kubenswrapper[4792]: W0319 17:06:30.743123 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/eb6e8887-3924-4571-8733-6e3bf3a47454/volumes/kubernetes.io~configmap/dns-svc Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.743450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.744362 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.744483 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.744658 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.744731 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.758872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f4ce965-a3ed-4d9f-918f-95ff40840ca5","Type":"ContainerDied","Data":"135eef3b724949524d84de89ee2c44a5c1e778d1caa401e5148e8d966ac96e0d"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.759002 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.771359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-395f-account-create-update-btrf9" event={"ID":"8358a03b-42d8-46b5-ab30-b4ac6486da4f","Type":"ContainerStarted","Data":"87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.774286 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4jvsw" event={"ID":"855fda36-92fa-4c54-8976-43639fa2ee51","Type":"ContainerStarted","Data":"6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.776935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8475-account-create-update-j72dl" event={"ID":"50dd4286-d2f2-4c9b-a80d-e4731ddc902b","Type":"ContainerStarted","Data":"3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1"} Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.793527 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb6e8887-3924-4571-8733-6e3bf3a47454" (UID: "eb6e8887-3924-4571-8733-6e3bf3a47454"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.847047 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb6e8887-3924-4571-8733-6e3bf3a47454-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.917162 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.932291 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.948635 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949094 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="init-config-reloader" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949135 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="init-config-reloader" Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949169 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="config-reloader" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949176 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="config-reloader" Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949216 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="dnsmasq-dns" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949222 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" 
containerName="dnsmasq-dns" Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949239 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="thanos-sidecar" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949245 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="thanos-sidecar" Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949256 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="init" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949264 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="init" Mar 19 17:06:30 crc kubenswrapper[4792]: E0319 17:06:30.949279 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.949285 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.950736 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="prometheus" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.950757 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" containerName="dnsmasq-dns" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.950771 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="thanos-sidecar" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.950785 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" containerName="config-reloader" Mar 19 
17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.960251 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.962669 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.971142 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.971420 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.971825 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.972528 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lh4q9" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.972870 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.973005 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.973438 4792 scope.go:117] "RemoveContainer" containerID="4ff3ae105c28d1d24dde1d2ed774b42ab2b3a65611d7e3095d9b2d4b2e46d91a" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.974806 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.979625 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 
19 17:06:30 crc kubenswrapper[4792]: I0319 17:06:30.986441 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.055365 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.055809 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.056291 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.056484 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94c78995-4f1f-4eca-a3fb-df83caafa647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.056799 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp75d\" (UniqueName: \"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-kube-api-access-jp75d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.056973 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.057569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.058027 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.058277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.058453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.058694 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.059001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.059180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.093181 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.104809 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-m4ldr"] Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161618 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161671 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94c78995-4f1f-4eca-a3fb-df83caafa647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp75d\" (UniqueName: \"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-kube-api-access-jp75d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161897 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161935 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.161958 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.162017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.162046 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.166718 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.169274 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.169300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/94c78995-4f1f-4eca-a3fb-df83caafa647-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.174226 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.174384 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.174899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.176148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.176544 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.182248 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.182421 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff2a51149070d10d9416b66fcd1d1cee37f55591d06c1c8c492c45e8a7bf5698/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.183992 4792 scope.go:117] "RemoveContainer" containerID="f3e965d403b1e60d7ff7f9cb0bae9dc1614602883d7752eabddea845d36e5baa"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.184963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.185780 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94c78995-4f1f-4eca-a3fb-df83caafa647-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.186973 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94c78995-4f1f-4eca-a3fb-df83caafa647-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.190716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp75d\" (UniqueName: \"kubernetes.io/projected/94c78995-4f1f-4eca-a3fb-df83caafa647-kube-api-access-jp75d\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.242076 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60925757-4d08-4a03-9cd7-b8eb3e8f5700\") pod \"prometheus-metric-storage-0\" (UID: \"94c78995-4f1f-4eca-a3fb-df83caafa647\") " pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.291619 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.399637 4792 scope.go:117] "RemoveContainer" containerID="776a0bacbac4ae1eac591d9c7210c05c54b0c823ef91c0bdd03109e84e6d2412"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.462474 4792 scope.go:117] "RemoveContainer" containerID="cd0abb866fd23b8c463291a13648f842ac5d9e6a62f969963ee38ddf353f3bc1"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.508331 4792 scope.go:117] "RemoveContainer" containerID="231d6c9cabaeddc0645573fec9a25a446b0ee43c32924261524c79803290fc0d"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.771401 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4ce965-a3ed-4d9f-918f-95ff40840ca5" path="/var/lib/kubelet/pods/9f4ce965-a3ed-4d9f-918f-95ff40840ca5/volumes"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.772379 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6e8887-3924-4571-8733-6e3bf3a47454" path="/var/lib/kubelet/pods/eb6e8887-3924-4571-8733-6e3bf3a47454/volumes"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.846266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4jvsw" event={"ID":"855fda36-92fa-4c54-8976-43639fa2ee51","Type":"ContainerStarted","Data":"8c64c2fbfaf423d4d5ee1649fa106580e4fa6598809de5a77fb177bc3f891de7"}
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.861179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8475-account-create-update-j72dl" event={"ID":"50dd4286-d2f2-4c9b-a80d-e4731ddc902b","Type":"ContainerStarted","Data":"4ecc6b182f8af45cc28aa535c68f33dbcf0c8ab9802e1610402108d621d13500"}
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.863515 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tjs8v" event={"ID":"de8eaf3e-d60d-4940-8fed-d307ef4afd12","Type":"ContainerStarted","Data":"a515db8d10c065dbbaba4db5e26cdd1e01a4271ffe0fecbe58bd4625f3e618b3"}
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.872334 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-68wjz" event={"ID":"447122e9-4195-4d8b-992d-dc435c22fa07","Type":"ContainerStarted","Data":"db7f74c2ecc023c5a4ef594a7ac15a70c9ba2e00a8e643aebd0b5d68b73ea062"}
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.915401 4792 generic.go:334] "Generic (PLEG): container finished" podID="54e42a2c-5486-4292-8810-da11833a706a" containerID="ebc71e2e8892fb06195cc7dd18fb6b6c446e5f7568813734f5821cd5fdeb9d49" exitCode=0
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.915568 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpcnr" event={"ID":"54e42a2c-5486-4292-8810-da11833a706a","Type":"ContainerDied","Data":"ebc71e2e8892fb06195cc7dd18fb6b6c446e5f7568813734f5821cd5fdeb9d49"}
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.943734 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-4jvsw" podStartSLOduration=7.943715819 podStartE2EDuration="7.943715819s" podCreationTimestamp="2026-03-19 17:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:31.89529917 +0000 UTC m=+1555.041356710" watchObservedRunningTime="2026-03-19 17:06:31.943715819 +0000 UTC m=+1555.089773349"
Mar 19 17:06:31 crc kubenswrapper[4792]: I0319 17:06:31.999039 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-68wjz" podStartSLOduration=8.999016317 podStartE2EDuration="8.999016317s" podCreationTimestamp="2026-03-19 17:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:31.936647415 +0000 UTC m=+1555.082704955" watchObservedRunningTime="2026-03-19 17:06:31.999016317 +0000 UTC m=+1555.145073857"
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.012709 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-tjs8v" podStartSLOduration=9.012691793 podStartE2EDuration="9.012691793s" podCreationTimestamp="2026-03-19 17:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:31.960260043 +0000 UTC m=+1555.106317583" watchObservedRunningTime="2026-03-19 17:06:32.012691793 +0000 UTC m=+1555.158749333"
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.056499 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-8475-account-create-update-j72dl" podStartSLOduration=9.056479615 podStartE2EDuration="9.056479615s" podCreationTimestamp="2026-03-19 17:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:32.004710663 +0000 UTC m=+1555.150768193" watchObservedRunningTime="2026-03-19 17:06:32.056479615 +0000 UTC m=+1555.202537155"
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.131055 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.928093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xbdtj" event={"ID":"193d3d1f-e773-4b86-a176-ddb5c7727e39","Type":"ContainerStarted","Data":"abc92c4d5e332e7935d081fddda3e7e0b52da9373251052abda89113d457ad36"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.930549 4792 generic.go:334] "Generic (PLEG): container finished" podID="447122e9-4195-4d8b-992d-dc435c22fa07" containerID="db7f74c2ecc023c5a4ef594a7ac15a70c9ba2e00a8e643aebd0b5d68b73ea062" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.930616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-68wjz" event={"ID":"447122e9-4195-4d8b-992d-dc435c22fa07","Type":"ContainerDied","Data":"db7f74c2ecc023c5a4ef594a7ac15a70c9ba2e00a8e643aebd0b5d68b73ea062"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.933091 4792 generic.go:334] "Generic (PLEG): container finished" podID="e2b04989-4417-4b0e-9a41-f4980d079a45" containerID="3558738f40913abb138aa214cad4d5f68e91f22874d7f3a7e6ecb70cc1109a8c" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.933165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4eab-account-create-update-b4vhn" event={"ID":"e2b04989-4417-4b0e-9a41-f4980d079a45","Type":"ContainerDied","Data":"3558738f40913abb138aa214cad4d5f68e91f22874d7f3a7e6ecb70cc1109a8c"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.936440 4792 generic.go:334] "Generic (PLEG): container finished" podID="606a03e6-0ad3-4369-9921-f68f56b278f4" containerID="e11b72e5f6c033109a62cf3fb361d74c864047f2ff5f54af8d17e9bac68c9fd4" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.936522 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0be0-account-create-update-8mkbd" event={"ID":"606a03e6-0ad3-4369-9921-f68f56b278f4","Type":"ContainerDied","Data":"e11b72e5f6c033109a62cf3fb361d74c864047f2ff5f54af8d17e9bac68c9fd4"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.939039 4792 generic.go:334] "Generic (PLEG): container finished" podID="8358a03b-42d8-46b5-ab30-b4ac6486da4f" containerID="7e1852b51e8511f8e10297200c675aa151f1049e2881185f705aefa87f55c0e0" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.939121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-395f-account-create-update-btrf9" event={"ID":"8358a03b-42d8-46b5-ab30-b4ac6486da4f","Type":"ContainerDied","Data":"7e1852b51e8511f8e10297200c675aa151f1049e2881185f705aefa87f55c0e0"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.941118 4792 generic.go:334] "Generic (PLEG): container finished" podID="855fda36-92fa-4c54-8976-43639fa2ee51" containerID="8c64c2fbfaf423d4d5ee1649fa106580e4fa6598809de5a77fb177bc3f891de7" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.941145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4jvsw" event={"ID":"855fda36-92fa-4c54-8976-43639fa2ee51","Type":"ContainerDied","Data":"8c64c2fbfaf423d4d5ee1649fa106580e4fa6598809de5a77fb177bc3f891de7"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.946262 4792 generic.go:334] "Generic (PLEG): container finished" podID="50dd4286-d2f2-4c9b-a80d-e4731ddc902b" containerID="4ecc6b182f8af45cc28aa535c68f33dbcf0c8ab9802e1610402108d621d13500" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.946350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8475-account-create-update-j72dl" event={"ID":"50dd4286-d2f2-4c9b-a80d-e4731ddc902b","Type":"ContainerDied","Data":"4ecc6b182f8af45cc28aa535c68f33dbcf0c8ab9802e1610402108d621d13500"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.948865 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"77cb387a-c012-4955-a0a9-272badd02d11","Type":"ContainerStarted","Data":"98ab4eb98530907bac5570762736c74bf0f49059bddeb51536ade08609e12178"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.950974 4792 generic.go:334] "Generic (PLEG): container finished" podID="de8eaf3e-d60d-4940-8fed-d307ef4afd12" containerID="a515db8d10c065dbbaba4db5e26cdd1e01a4271ffe0fecbe58bd4625f3e618b3" exitCode=0
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.951072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tjs8v" event={"ID":"de8eaf3e-d60d-4940-8fed-d307ef4afd12","Type":"ContainerDied","Data":"a515db8d10c065dbbaba4db5e26cdd1e01a4271ffe0fecbe58bd4625f3e618b3"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.952774 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerStarted","Data":"9b2bffb91895e49ddc7593b63d083addf0c1109d93616d0f569ef09ac8d5d27f"}
Mar 19 17:06:32 crc kubenswrapper[4792]: I0319 17:06:32.953119 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xbdtj" podStartSLOduration=5.625562661 podStartE2EDuration="22.953100668s" podCreationTimestamp="2026-03-19 17:06:10 +0000 UTC" firstStartedPulling="2026-03-19 17:06:11.413946915 +0000 UTC m=+1534.560004445" lastFinishedPulling="2026-03-19 17:06:28.741484912 +0000 UTC m=+1551.887542452" observedRunningTime="2026-03-19 17:06:32.945046807 +0000 UTC m=+1556.091104367" watchObservedRunningTime="2026-03-19 17:06:32.953100668 +0000 UTC m=+1556.099158208"
Mar 19 17:06:33 crc kubenswrapper[4792]: I0319 17:06:33.049145 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.762128156 podStartE2EDuration="16.049124834s" podCreationTimestamp="2026-03-19 17:06:17 +0000 UTC" firstStartedPulling="2026-03-19 17:06:18.913689554 +0000 UTC m=+1542.059747084" lastFinishedPulling="2026-03-19 17:06:31.200686222 +0000 UTC m=+1554.346743762" observedRunningTime="2026-03-19 17:06:33.042264166 +0000 UTC m=+1556.188321716" watchObservedRunningTime="2026-03-19 17:06:33.049124834 +0000 UTC m=+1556.195182374"
Mar 19 17:06:33 crc kubenswrapper[4792]: I0319 17:06:33.531599 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=<
Mar 19 17:06:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 17:06:33 crc kubenswrapper[4792]: >
Mar 19 17:06:35 crc kubenswrapper[4792]: I0319 17:06:35.988416 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerStarted","Data":"627a000cff0fca1512c3cdfe3cee4793a8b8798eda7e1879a1a0ca2fc30ef081"}
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.695247 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.702142 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4eab-account-create-update-b4vhn"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.714556 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.735121 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbgm\" (UniqueName: \"kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm\") pod \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.735337 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts\") pod \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\" (UID: \"50dd4286-d2f2-4c9b-a80d-e4731ddc902b\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.735833 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50dd4286-d2f2-4c9b-a80d-e4731ddc902b" (UID: "50dd4286-d2f2-4c9b-a80d-e4731ddc902b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.783252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm" (OuterVolumeSpecName: "kube-api-access-2dbgm") pod "50dd4286-d2f2-4c9b-a80d-e4731ddc902b" (UID: "50dd4286-d2f2-4c9b-a80d-e4731ddc902b"). InnerVolumeSpecName "kube-api-access-2dbgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.838263 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts\") pod \"54e42a2c-5486-4292-8810-da11833a706a\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.838376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zz8\" (UniqueName: \"kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8\") pod \"54e42a2c-5486-4292-8810-da11833a706a\" (UID: \"54e42a2c-5486-4292-8810-da11833a706a\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.838428 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts\") pod \"e2b04989-4417-4b0e-9a41-f4980d079a45\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.838540 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcf9p\" (UniqueName: \"kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p\") pod \"e2b04989-4417-4b0e-9a41-f4980d079a45\" (UID: \"e2b04989-4417-4b0e-9a41-f4980d079a45\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.839733 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2b04989-4417-4b0e-9a41-f4980d079a45" (UID: "e2b04989-4417-4b0e-9a41-f4980d079a45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.840307 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.840332 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b04989-4417-4b0e-9a41-f4980d079a45-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.840343 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dbgm\" (UniqueName: \"kubernetes.io/projected/50dd4286-d2f2-4c9b-a80d-e4731ddc902b-kube-api-access-2dbgm\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.841081 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54e42a2c-5486-4292-8810-da11833a706a" (UID: "54e42a2c-5486-4292-8810-da11833a706a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.853458 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8" (OuterVolumeSpecName: "kube-api-access-r4zz8") pod "54e42a2c-5486-4292-8810-da11833a706a" (UID: "54e42a2c-5486-4292-8810-da11833a706a"). InnerVolumeSpecName "kube-api-access-r4zz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.855037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p" (OuterVolumeSpecName: "kube-api-access-mcf9p") pod "e2b04989-4417-4b0e-9a41-f4980d079a45" (UID: "e2b04989-4417-4b0e-9a41-f4980d079a45"). InnerVolumeSpecName "kube-api-access-mcf9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.856609 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.909783 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.919769 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.933016 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.941512 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzhz4\" (UniqueName: \"kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4\") pod \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.941717 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts\") pod \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\" (UID: \"8358a03b-42d8-46b5-ab30-b4ac6486da4f\") "
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.942141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8358a03b-42d8-46b5-ab30-b4ac6486da4f" (UID: "8358a03b-42d8-46b5-ab30-b4ac6486da4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.942282 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e42a2c-5486-4292-8810-da11833a706a-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.942305 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4zz8\" (UniqueName: \"kubernetes.io/projected/54e42a2c-5486-4292-8810-da11833a706a-kube-api-access-r4zz8\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.942320 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8358a03b-42d8-46b5-ab30-b4ac6486da4f-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.942333 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcf9p\" (UniqueName: \"kubernetes.io/projected/e2b04989-4417-4b0e-9a41-f4980d079a45-kube-api-access-mcf9p\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.945411 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4" (OuterVolumeSpecName: "kube-api-access-wzhz4") pod "8358a03b-42d8-46b5-ab30-b4ac6486da4f" (UID: "8358a03b-42d8-46b5-ab30-b4ac6486da4f"). InnerVolumeSpecName "kube-api-access-wzhz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:36 crc kubenswrapper[4792]: I0319 17:06:36.952622 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.002667 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4eab-account-create-update-b4vhn" event={"ID":"e2b04989-4417-4b0e-9a41-f4980d079a45","Type":"ContainerDied","Data":"1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.002708 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1723b4b4257ba7fa8cf0a8a86f440f1a0fbc73abeeb68e5f391ca8cfb1a68be5"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.002760 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4eab-account-create-update-b4vhn"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.005936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4jvsw" event={"ID":"855fda36-92fa-4c54-8976-43639fa2ee51","Type":"ContainerDied","Data":"6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.005970 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6beab12622c6e47225134e94753f6deb2f1a119f1d66377d188a84ac1f5eab69"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.006022 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4jvsw"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.007676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8475-account-create-update-j72dl" event={"ID":"50dd4286-d2f2-4c9b-a80d-e4731ddc902b","Type":"ContainerDied","Data":"3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.007703 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f726caa2822d51d897f6e33a3b83ee12b5e3f323b250e14f576c2de5ff619d1"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.007750 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8475-account-create-update-j72dl"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.017227 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tjs8v" event={"ID":"de8eaf3e-d60d-4940-8fed-d307ef4afd12","Type":"ContainerDied","Data":"57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.017273 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57be36eb980abaa11ce337912e982d0db10116e13c2411e133a730ff72479db7"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.017346 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tjs8v"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.019179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-68wjz" event={"ID":"447122e9-4195-4d8b-992d-dc435c22fa07","Type":"ContainerDied","Data":"4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.019212 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b988ef7f972f9690d8322ddaf377cfa77386dca245d4d7304837fb842750286"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.019261 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-68wjz"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.020975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gpcnr" event={"ID":"54e42a2c-5486-4292-8810-da11833a706a","Type":"ContainerDied","Data":"02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.021010 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f2ade7a1c837627044cb1180066a83f0c710dde3c78fb8ced53da28b537c48"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.021034 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gpcnr"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.022664 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0be0-account-create-update-8mkbd" event={"ID":"606a03e6-0ad3-4369-9921-f68f56b278f4","Type":"ContainerDied","Data":"3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.022691 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3d6307f3157f7afa04189030f80c24131b594c606a5cf123e54df9a550cde8"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.022737 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0be0-account-create-update-8mkbd"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.025514 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-395f-account-create-update-btrf9" event={"ID":"8358a03b-42d8-46b5-ab30-b4ac6486da4f","Type":"ContainerDied","Data":"87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.025540 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d0e80883b7dbfef1acb8a3bd94874463ab144c11031c69821f16a5b24899d6"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.025573 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-395f-account-create-update-btrf9"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.029909 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mmz5r" event={"ID":"efad1545-5a1e-45ab-bf50-952c2c8eeba9","Type":"ContainerStarted","Data":"2e4da9393a4cd015582f8bc3e191fdf46ffd0695a530de29cc25437e905395c1"}
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043349 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z49bh\" (UniqueName: \"kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh\") pod \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts\") pod \"447122e9-4195-4d8b-992d-dc435c22fa07\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043432 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbrk\" (UniqueName: \"kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk\") pod \"447122e9-4195-4d8b-992d-dc435c22fa07\" (UID: \"447122e9-4195-4d8b-992d-dc435c22fa07\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts\") pod \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\" (UID: \"de8eaf3e-d60d-4940-8fed-d307ef4afd12\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043531 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27gvf\" (UniqueName: \"kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf\") pod \"606a03e6-0ad3-4369-9921-f68f56b278f4\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043694 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts\") pod \"855fda36-92fa-4c54-8976-43639fa2ee51\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts\") pod \"606a03e6-0ad3-4369-9921-f68f56b278f4\" (UID: \"606a03e6-0ad3-4369-9921-f68f56b278f4\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.043756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blfp4\" (UniqueName: \"kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4\") pod \"855fda36-92fa-4c54-8976-43639fa2ee51\" (UID: \"855fda36-92fa-4c54-8976-43639fa2ee51\") "
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.044174 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzhz4\" (UniqueName: \"kubernetes.io/projected/8358a03b-42d8-46b5-ab30-b4ac6486da4f-kube-api-access-wzhz4\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.044168 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "447122e9-4195-4d8b-992d-dc435c22fa07" (UID: "447122e9-4195-4d8b-992d-dc435c22fa07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.044485 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "855fda36-92fa-4c54-8976-43639fa2ee51" (UID: "855fda36-92fa-4c54-8976-43639fa2ee51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.044708 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de8eaf3e-d60d-4940-8fed-d307ef4afd12" (UID: "de8eaf3e-d60d-4940-8fed-d307ef4afd12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.045183 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "606a03e6-0ad3-4369-9921-f68f56b278f4" (UID: "606a03e6-0ad3-4369-9921-f68f56b278f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.051093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf" (OuterVolumeSpecName: "kube-api-access-27gvf") pod "606a03e6-0ad3-4369-9921-f68f56b278f4" (UID: "606a03e6-0ad3-4369-9921-f68f56b278f4"). InnerVolumeSpecName "kube-api-access-27gvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.054436 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mmz5r" podStartSLOduration=6.904203182 podStartE2EDuration="13.054416806s" podCreationTimestamp="2026-03-19 17:06:24 +0000 UTC" firstStartedPulling="2026-03-19 17:06:30.369394761 +0000 UTC m=+1553.515452301" lastFinishedPulling="2026-03-19 17:06:36.519608385 +0000 UTC m=+1559.665665925" observedRunningTime="2026-03-19 17:06:37.04691517 +0000 UTC m=+1560.192972710" watchObservedRunningTime="2026-03-19 17:06:37.054416806 +0000 UTC m=+1560.200474336"
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.064325 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk" (OuterVolumeSpecName: "kube-api-access-zzbrk") pod "447122e9-4195-4d8b-992d-dc435c22fa07" (UID: "447122e9-4195-4d8b-992d-dc435c22fa07"). InnerVolumeSpecName "kube-api-access-zzbrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.077457 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh" (OuterVolumeSpecName: "kube-api-access-z49bh") pod "de8eaf3e-d60d-4940-8fed-d307ef4afd12" (UID: "de8eaf3e-d60d-4940-8fed-d307ef4afd12"). InnerVolumeSpecName "kube-api-access-z49bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.077605 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4" (OuterVolumeSpecName: "kube-api-access-blfp4") pod "855fda36-92fa-4c54-8976-43639fa2ee51" (UID: "855fda36-92fa-4c54-8976-43639fa2ee51"). InnerVolumeSpecName "kube-api-access-blfp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155015 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbrk\" (UniqueName: \"kubernetes.io/projected/447122e9-4195-4d8b-992d-dc435c22fa07-kube-api-access-zzbrk\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155055 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8eaf3e-d60d-4940-8fed-d307ef4afd12-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155069 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27gvf\" (UniqueName: \"kubernetes.io/projected/606a03e6-0ad3-4369-9921-f68f56b278f4-kube-api-access-27gvf\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155083 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855fda36-92fa-4c54-8976-43639fa2ee51-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155096 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/606a03e6-0ad3-4369-9921-f68f56b278f4-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155110 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blfp4\" (UniqueName: \"kubernetes.io/projected/855fda36-92fa-4c54-8976-43639fa2ee51-kube-api-access-blfp4\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155124 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z49bh\" (UniqueName:
\"kubernetes.io/projected/de8eaf3e-d60d-4940-8fed-d307ef4afd12-kube-api-access-z49bh\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:37 crc kubenswrapper[4792]: I0319 17:06:37.155137 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/447122e9-4195-4d8b-992d-dc435c22fa07-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:41 crc kubenswrapper[4792]: I0319 17:06:41.069700 4792 generic.go:334] "Generic (PLEG): container finished" podID="efad1545-5a1e-45ab-bf50-952c2c8eeba9" containerID="2e4da9393a4cd015582f8bc3e191fdf46ffd0695a530de29cc25437e905395c1" exitCode=0 Mar 19 17:06:41 crc kubenswrapper[4792]: I0319 17:06:41.069802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mmz5r" event={"ID":"efad1545-5a1e-45ab-bf50-952c2c8eeba9","Type":"ContainerDied","Data":"2e4da9393a4cd015582f8bc3e191fdf46ffd0695a530de29cc25437e905395c1"} Mar 19 17:06:41 crc kubenswrapper[4792]: I0319 17:06:41.071954 4792 generic.go:334] "Generic (PLEG): container finished" podID="94c78995-4f1f-4eca-a3fb-df83caafa647" containerID="627a000cff0fca1512c3cdfe3cee4793a8b8798eda7e1879a1a0ca2fc30ef081" exitCode=0 Mar 19 17:06:41 crc kubenswrapper[4792]: I0319 17:06:41.071987 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerDied","Data":"627a000cff0fca1512c3cdfe3cee4793a8b8798eda7e1879a1a0ca2fc30ef081"} Mar 19 17:06:41 crc kubenswrapper[4792]: I0319 17:06:41.667268 4792 scope.go:117] "RemoveContainer" containerID="aa8700972a4cfdcc197ed3f3051a23c9b3d30c93ae41668692c66ec3a83b6958" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.086527 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerStarted","Data":"c73e2989c83e375dd6ce0efeba07bacf9e64de2594d46b2d967e7fabb3279e77"} Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.486596 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.610549 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle\") pod \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.610614 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data\") pod \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.610692 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x486\" (UniqueName: \"kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486\") pod \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\" (UID: \"efad1545-5a1e-45ab-bf50-952c2c8eeba9\") " Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.623193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486" (OuterVolumeSpecName: "kube-api-access-4x486") pod "efad1545-5a1e-45ab-bf50-952c2c8eeba9" (UID: "efad1545-5a1e-45ab-bf50-952c2c8eeba9"). InnerVolumeSpecName "kube-api-access-4x486". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.640199 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efad1545-5a1e-45ab-bf50-952c2c8eeba9" (UID: "efad1545-5a1e-45ab-bf50-952c2c8eeba9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.669495 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data" (OuterVolumeSpecName: "config-data") pod "efad1545-5a1e-45ab-bf50-952c2c8eeba9" (UID: "efad1545-5a1e-45ab-bf50-952c2c8eeba9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.712978 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.713010 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efad1545-5a1e-45ab-bf50-952c2c8eeba9-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:42 crc kubenswrapper[4792]: I0319 17:06:42.713023 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x486\" (UniqueName: \"kubernetes.io/projected/efad1545-5a1e-45ab-bf50-952c2c8eeba9-kube-api-access-4x486\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.098612 4792 generic.go:334] "Generic (PLEG): container finished" podID="193d3d1f-e773-4b86-a176-ddb5c7727e39" containerID="abc92c4d5e332e7935d081fddda3e7e0b52da9373251052abda89113d457ad36" 
exitCode=0 Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.098690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xbdtj" event={"ID":"193d3d1f-e773-4b86-a176-ddb5c7727e39","Type":"ContainerDied","Data":"abc92c4d5e332e7935d081fddda3e7e0b52da9373251052abda89113d457ad36"} Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.100796 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mmz5r" event={"ID":"efad1545-5a1e-45ab-bf50-952c2c8eeba9","Type":"ContainerDied","Data":"dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a"} Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.100876 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mmz5r" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.100891 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd1bd23e8de92b78048c12d8af8c4f48d13a1b3952913777860e10364b74177a" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.305907 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pm7qd"] Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.306850 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8eaf3e-d60d-4940-8fed-d307ef4afd12" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.306944 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8eaf3e-d60d-4940-8fed-d307ef4afd12" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307011 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efad1545-5a1e-45ab-bf50-952c2c8eeba9" containerName="keystone-db-sync" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307073 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="efad1545-5a1e-45ab-bf50-952c2c8eeba9" containerName="keystone-db-sync" Mar 19 
17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307142 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855fda36-92fa-4c54-8976-43639fa2ee51" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307193 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="855fda36-92fa-4c54-8976-43639fa2ee51" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307256 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b04989-4417-4b0e-9a41-f4980d079a45" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307308 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b04989-4417-4b0e-9a41-f4980d079a45" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307367 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606a03e6-0ad3-4369-9921-f68f56b278f4" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307420 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="606a03e6-0ad3-4369-9921-f68f56b278f4" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307478 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447122e9-4195-4d8b-992d-dc435c22fa07" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307534 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="447122e9-4195-4d8b-992d-dc435c22fa07" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307608 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e42a2c-5486-4292-8810-da11833a706a" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307676 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="54e42a2c-5486-4292-8810-da11833a706a" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307733 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50dd4286-d2f2-4c9b-a80d-e4731ddc902b" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307787 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="50dd4286-d2f2-4c9b-a80d-e4731ddc902b" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: E0319 17:06:43.307855 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8358a03b-42d8-46b5-ab30-b4ac6486da4f" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.307917 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8358a03b-42d8-46b5-ab30-b4ac6486da4f" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308243 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="606a03e6-0ad3-4369-9921-f68f56b278f4" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308312 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e42a2c-5486-4292-8810-da11833a706a" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308368 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="50dd4286-d2f2-4c9b-a80d-e4731ddc902b" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308428 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="447122e9-4195-4d8b-992d-dc435c22fa07" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308493 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8358a03b-42d8-46b5-ab30-b4ac6486da4f" 
containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308557 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="efad1545-5a1e-45ab-bf50-952c2c8eeba9" containerName="keystone-db-sync" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308615 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8eaf3e-d60d-4940-8fed-d307ef4afd12" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308675 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b04989-4417-4b0e-9a41-f4980d079a45" containerName="mariadb-account-create-update" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.308734 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="855fda36-92fa-4c54-8976-43639fa2ee51" containerName="mariadb-database-create" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.309531 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.311827 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.319110 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.320933 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.325804 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.326044 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.326265 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.326379 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrkzh" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.331482 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pm7qd"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.363266 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.429556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.429602 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkx78\" (UniqueName: \"kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.429621 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppbx\" (UniqueName: \"kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.429668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430263 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.430394 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.453984 
4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-r6f9z"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.456093 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.460361 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.460545 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dfllr" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.507021 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r6f9z"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532269 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.532330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533487 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkx78\" (UniqueName: \"kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6ppbx\" (UniqueName: \"kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533606 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533641 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gsh\" (UniqueName: \"kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.533787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.534678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.536928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.539457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.540012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc\") pod 
\"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.540538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.549865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.550337 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.550648 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.552852 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc 
kubenswrapper[4792]: I0319 17:06:43.555530 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=< Mar 19 17:06:43 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:06:43 crc kubenswrapper[4792]: > Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.584269 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6ftwc"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.584515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ppbx\" (UniqueName: \"kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx\") pod \"dnsmasq-dns-6f8c45789f-kqqzt\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.585695 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.593112 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-g5zgx" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.609656 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.609831 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.619015 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6ftwc"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.630361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.639282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.639569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gsh\" (UniqueName: \"kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.639830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.650275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkx78\" (UniqueName: \"kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78\") pod \"keystone-bootstrap-pm7qd\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") " pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.652960 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-86jjn"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.654683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.659940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.660818 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data\") pod \"heat-db-sync-r6f9z\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.673411 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kztlw" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.673629 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 19 17:06:43 crc 
kubenswrapper[4792]: I0319 17:06:43.673764 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.674365 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pm7qd" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.675820 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.683018 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2chz4"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.684530 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.696325 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.696465 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.696558 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9ghfj" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.750648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.750694 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts\") pod 
\"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.750720 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.750782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9l5\" (UniqueName: \"kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.750857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751504 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwlt\" (UniqueName: \"kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt\") 
pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751607 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.751631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.857610 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gsh\" (UniqueName: \"kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh\") pod \"heat-db-sync-r6f9z\" (UID: 
\"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870310 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjwlt\" (UniqueName: \"kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 
17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870542 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870565 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870639 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870657 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870675 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870699 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9l5\" (UniqueName: \"kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.870716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps7q\" (UniqueName: \"kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.881336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.883894 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id\") pod 
\"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.887679 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.890549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.894577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.896489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.901128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.913098 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.916210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.919238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2chz4"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.919280 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.939219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjwlt\" (UniqueName: \"kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt\") pod \"cinder-db-sync-6ftwc\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.940929 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-86jjn"] Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.951778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9l5\" (UniqueName: \"kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5\") pod \"placement-db-sync-86jjn\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.975412 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.975488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:43 crc kubenswrapper[4792]: I0319 17:06:43.975547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bps7q\" (UniqueName: \"kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.002548 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.004373 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.079929 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.084452 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.107534 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-r6f9z" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.108007 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86jjn" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.124785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.124888 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.125588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bps7q\" (UniqueName: \"kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q\") pod \"neutron-db-sync-2chz4\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.138493 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wm2lm"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.147423 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.177617 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7t5sq" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.177917 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184230 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7th9t\" (UniqueName: \"kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.184354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.223692 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wm2lm"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.320170 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324056 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc 
kubenswrapper[4792]: I0319 17:06:44.324135 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324160 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlvc\" (UniqueName: \"kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324246 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324330 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7th9t\" (UniqueName: \"kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.324354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.325392 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.334486 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.337942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.338453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.348096 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.348708 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.355446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.358421 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.377194 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.378743 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7th9t\" (UniqueName: \"kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t\") pod \"dnsmasq-dns-fcfdd6f9f-bfbgb\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.426059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.426103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.426142 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlvc\" (UniqueName: \"kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.432334 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc 
kubenswrapper[4792]: I0319 17:06:44.444489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.445463 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2chz4" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.493923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlvc\" (UniqueName: \"kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc\") pod \"barbican-db-sync-wm2lm\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.528404 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.528771 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.528896 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " 
pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.528961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.529021 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.533150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.533188 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2ff\" (UniqueName: \"kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.546390 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.570947 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639487 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639534 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639625 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639644 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt2ff\" (UniqueName: \"kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.639927 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.640604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.649479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.650664 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.655883 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.658141 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.665356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt2ff\" (UniqueName: \"kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff\") pod \"ceilometer-0\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " pod="openstack/ceilometer-0" Mar 19 17:06:44 crc kubenswrapper[4792]: I0319 17:06:44.888639 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.083823 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pm7qd"] Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.166897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerStarted","Data":"f7b6782b6eda27ebdc6e271d8602ee71fb87654128876225a5eef74b625bb9c9"} Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.166956 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"94c78995-4f1f-4eca-a3fb-df83caafa647","Type":"ContainerStarted","Data":"309be378d01e964b78157b8813968de85d62f3e572af19448f131677c79de8ac"} Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.175285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pm7qd" event={"ID":"60077a8d-2acb-4c40-8052-f79382e6e373","Type":"ContainerStarted","Data":"32ff18bd9f2eeff659c27fd6098e901e201e041adda32fcba17b3698c7042a41"} Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.244309 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.244022566 podStartE2EDuration="15.244022566s" podCreationTimestamp="2026-03-19 17:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:45.234016151 +0000 UTC m=+1568.380073691" watchObservedRunningTime="2026-03-19 17:06:45.244022566 +0000 UTC m=+1568.390080106" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.338174 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.464241 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv6v8\" (UniqueName: \"kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8\") pod \"193d3d1f-e773-4b86-a176-ddb5c7727e39\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.464569 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data\") pod \"193d3d1f-e773-4b86-a176-ddb5c7727e39\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.464599 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle\") pod \"193d3d1f-e773-4b86-a176-ddb5c7727e39\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.464645 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data\") pod \"193d3d1f-e773-4b86-a176-ddb5c7727e39\" (UID: \"193d3d1f-e773-4b86-a176-ddb5c7727e39\") " Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.480148 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "193d3d1f-e773-4b86-a176-ddb5c7727e39" (UID: "193d3d1f-e773-4b86-a176-ddb5c7727e39"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.487699 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8" (OuterVolumeSpecName: "kube-api-access-pv6v8") pod "193d3d1f-e773-4b86-a176-ddb5c7727e39" (UID: "193d3d1f-e773-4b86-a176-ddb5c7727e39"). InnerVolumeSpecName "kube-api-access-pv6v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.499297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "193d3d1f-e773-4b86-a176-ddb5c7727e39" (UID: "193d3d1f-e773-4b86-a176-ddb5c7727e39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.572219 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv6v8\" (UniqueName: \"kubernetes.io/projected/193d3d1f-e773-4b86-a176-ddb5c7727e39-kube-api-access-pv6v8\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.572243 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.572254 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.578446 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data" (OuterVolumeSpecName: "config-data") pod "193d3d1f-e773-4b86-a176-ddb5c7727e39" (UID: "193d3d1f-e773-4b86-a176-ddb5c7727e39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.673806 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193d3d1f-e773-4b86-a176-ddb5c7727e39-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.794132 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2chz4"] Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.807771 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6ftwc"] Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.819887 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r6f9z"] Mar 19 17:06:45 crc kubenswrapper[4792]: W0319 17:06:45.853636 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c29b201_cfdb_4e28_9752_722c118b6452.slice/crio-5e07b7c12eae3b9a78039ba1daa6b9157795d2b81bc4846ebd9317e8dcc1647c WatchSource:0}: Error finding container 5e07b7c12eae3b9a78039ba1daa6b9157795d2b81bc4846ebd9317e8dcc1647c: Status 404 returned error can't find the container with id 5e07b7c12eae3b9a78039ba1daa6b9157795d2b81bc4846ebd9317e8dcc1647c Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.853684 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-86jjn"] Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.867906 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"] Mar 19 17:06:45 crc kubenswrapper[4792]: I0319 17:06:45.879655 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"] Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.066255 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wm2lm"] Mar 19 17:06:46 crc kubenswrapper[4792]: W0319 17:06:46.155713 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c380bc3_72a1_4c70_b3b0_6f3ee2ecc373.slice/crio-9270f2a92bd4c1c18b99092f6c623f1580eff48588bbba81163399c1a2882500 WatchSource:0}: Error finding container 9270f2a92bd4c1c18b99092f6c623f1580eff48588bbba81163399c1a2882500: Status 404 returned error can't find the container with id 9270f2a92bd4c1c18b99092f6c623f1580eff48588bbba81163399c1a2882500 Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.186462 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.192294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerStarted","Data":"9270f2a92bd4c1c18b99092f6c623f1580eff48588bbba81163399c1a2882500"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.193739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86jjn" event={"ID":"567d324f-126d-4f06-91df-d2d84fd836f3","Type":"ContainerStarted","Data":"b1da40b21b70316e7af01c523d50d97e4b6417fdbaca03bfba9123bc0fbabc3b"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.195435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" event={"ID":"7c29b201-cfdb-4e28-9752-722c118b6452","Type":"ContainerStarted","Data":"5e07b7c12eae3b9a78039ba1daa6b9157795d2b81bc4846ebd9317e8dcc1647c"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.199382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" 
event={"ID":"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c","Type":"ContainerStarted","Data":"dd20915955f72d7fba9e97ee56896ac695a3222d09311bf801a9f4c43b9c9f2c"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.199433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" event={"ID":"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c","Type":"ContainerStarted","Data":"e19004c78dd7cb382072612e90fcc67bf6c66c7ce44a605fa23e8ada7b6da547"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.205581 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6ftwc" event={"ID":"ef634102-a683-498b-ad98-61d470f7fefa","Type":"ContainerStarted","Data":"15fa15635294600f48f347bbe7c8897e1bcdf7653fbfc9be1758cf9f2500fe16"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.212784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2chz4" event={"ID":"13b09649-8b3c-4328-97b8-5c5c8d3e198b","Type":"ContainerStarted","Data":"ef5ed9526e3a9f2edb00d59d16e401ab9d770cdc8d505864ad15ad8f04b617a8"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.212813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2chz4" event={"ID":"13b09649-8b3c-4328-97b8-5c5c8d3e198b","Type":"ContainerStarted","Data":"39be2fbe6fcb21953ee9b46da8609c217c6a94a5aa0c1c820ccc377eca37713a"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.226101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pm7qd" event={"ID":"60077a8d-2acb-4c40-8052-f79382e6e373","Type":"ContainerStarted","Data":"8634d390ea5e9b4e989d3ae467efae8b818c212c7d2d75bb7fa9478bd172fcc9"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.232701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xbdtj" 
event={"ID":"193d3d1f-e773-4b86-a176-ddb5c7727e39","Type":"ContainerDied","Data":"dc71cf71a1fde23c8d04d544ef23b3ba51789f9643b8142128aea99aded05ead"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.232741 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc71cf71a1fde23c8d04d544ef23b3ba51789f9643b8142128aea99aded05ead" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.232848 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xbdtj" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.251094 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm2lm" event={"ID":"03107c0e-b888-4df4-892a-daebb217a18e","Type":"ContainerStarted","Data":"c85e79a1869cd55ce77831fcd056b3003a015143da3f12548dc0d635197f00e1"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.264167 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r6f9z" event={"ID":"cdaaa799-71ff-429b-86fe-bbe4e903984f","Type":"ContainerStarted","Data":"2762ee6b355c644a9ed5e6ada87b53c714e25a53fb2e3576f7cbe06f8c62df76"} Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.295549 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2chz4" podStartSLOduration=3.295533711 podStartE2EDuration="3.295533711s" podCreationTimestamp="2026-03-19 17:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:46.29039134 +0000 UTC m=+1569.436448880" watchObservedRunningTime="2026-03-19 17:06:46.295533711 +0000 UTC m=+1569.441591251" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.296338 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.296384 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.341169 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.346581 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pm7qd" podStartSLOduration=3.346562972 podStartE2EDuration="3.346562972s" podCreationTimestamp="2026-03-19 17:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:46.327299533 +0000 UTC m=+1569.473357073" watchObservedRunningTime="2026-03-19 17:06:46.346562972 +0000 UTC m=+1569.492620512" Mar 19 17:06:46 crc kubenswrapper[4792]: I0319 17:06:46.908050 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.110485 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"] Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.143906 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:06:47 crc kubenswrapper[4792]: E0319 17:06:47.148772 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193d3d1f-e773-4b86-a176-ddb5c7727e39" containerName="glance-db-sync" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.148801 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="193d3d1f-e773-4b86-a176-ddb5c7727e39" containerName="glance-db-sync" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.149066 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="193d3d1f-e773-4b86-a176-ddb5c7727e39" containerName="glance-db-sync" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.150534 4792 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.177991 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.262912 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.264274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.264415 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.264564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.264784 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkccc\" (UniqueName: \"kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.264939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.307621 4792 generic.go:334] "Generic (PLEG): container finished" podID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" containerID="dd20915955f72d7fba9e97ee56896ac695a3222d09311bf801a9f4c43b9c9f2c" exitCode=0 Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.307718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" event={"ID":"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c","Type":"ContainerDied","Data":"dd20915955f72d7fba9e97ee56896ac695a3222d09311bf801a9f4c43b9c9f2c"} Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.337525 4792 generic.go:334] "Generic (PLEG): container finished" podID="7c29b201-cfdb-4e28-9752-722c118b6452" containerID="f90e1f46b135b26be72f3f5454e552e8ae78520bd39e0d1555e96c7122e50fab" exitCode=0 Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.338936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" event={"ID":"7c29b201-cfdb-4e28-9752-722c118b6452","Type":"ContainerDied","Data":"f90e1f46b135b26be72f3f5454e552e8ae78520bd39e0d1555e96c7122e50fab"} Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.366768 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/prometheus-metric-storage-0" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.366869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.369039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.373058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.373141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.373339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc 
kubenswrapper[4792]: I0319 17:06:47.373458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkccc\" (UniqueName: \"kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.373514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.374353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.375079 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.375828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.377012 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.402770 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkccc\" (UniqueName: \"kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc\") pod \"dnsmasq-dns-57c957c4ff-db8kg\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.522926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:06:47 crc kubenswrapper[4792]: E0319 17:06:47.794123 4792 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 19 17:06:47 crc kubenswrapper[4792]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 17:06:47 crc kubenswrapper[4792]: > podSandboxID="e19004c78dd7cb382072612e90fcc67bf6c66c7ce44a605fa23e8ada7b6da547" Mar 19 17:06:47 crc kubenswrapper[4792]: E0319 17:06:47.794448 4792 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 19 17:06:47 crc kubenswrapper[4792]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h689h555h589h8bh66bh5ddh557h5f9h94h689h679hffh596h79h564h696h8fh67h8dh5fdh585h58bhfch8h5dch554h65bh549hc8h67ch55fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7th9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-fcfdd6f9f-bfbgb_openstack(f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 17:06:47 crc kubenswrapper[4792]: > logger="UnhandledError" Mar 19 17:06:47 crc kubenswrapper[4792]: E0319 17:06:47.796534 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" podUID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" Mar 19 17:06:47 crc kubenswrapper[4792]: I0319 17:06:47.925372 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:47.999362 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.000490 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.000522 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.000552 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ppbx\" (UniqueName: \"kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.000642 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.000693 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config\") pod \"7c29b201-cfdb-4e28-9752-722c118b6452\" (UID: \"7c29b201-cfdb-4e28-9752-722c118b6452\") " Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.010256 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.020104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx" (OuterVolumeSpecName: "kube-api-access-6ppbx") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). InnerVolumeSpecName "kube-api-access-6ppbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: E0319 17:06:48.024231 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c29b201-cfdb-4e28-9752-722c118b6452" containerName="init" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.024354 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c29b201-cfdb-4e28-9752-722c118b6452" containerName="init" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.031213 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c29b201-cfdb-4e28-9752-722c118b6452" containerName="init" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.035481 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.040727 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.040776 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bcbrs" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.040902 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.042683 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.044647 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config" (OuterVolumeSpecName: "config") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.049486 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.066490 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.069337 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.110419 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.110460 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.110475 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.110488 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.110501 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ppbx\" (UniqueName: \"kubernetes.io/projected/7c29b201-cfdb-4e28-9752-722c118b6452-kube-api-access-6ppbx\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.111834 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c29b201-cfdb-4e28-9752-722c118b6452" (UID: "7c29b201-cfdb-4e28-9752-722c118b6452"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.217368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.217413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.217448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.217476 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc 
kubenswrapper[4792]: I0319 17:06:48.217577 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.219554 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.219592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mb4\" (UniqueName: \"kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.220366 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c29b201-cfdb-4e28-9752-722c118b6452-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.293922 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.296236 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.302229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.308946 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.324744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.325079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.325150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4mb4\" (UniqueName: \"kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.325443 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc 
kubenswrapper[4792]: I0319 17:06:48.325479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.325544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.325611 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.327190 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.327464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.336534 4792 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.336576 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4b43246ff1db308c2bef8dd59bebb849755d71eee7e8415d63550c78edf7118/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.337407 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.340603 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.341305 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.382666 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4mb4\" 
(UniqueName: \"kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.391224 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.391211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kqqzt" event={"ID":"7c29b201-cfdb-4e28-9752-722c118b6452","Type":"ContainerDied","Data":"5e07b7c12eae3b9a78039ba1daa6b9157795d2b81bc4846ebd9317e8dcc1647c"} Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.391530 4792 scope.go:117] "RemoveContainer" containerID="f90e1f46b135b26be72f3f5454e552e8ae78520bd39e0d1555e96c7122e50fab" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.393060 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.414699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " pod="openstack/glance-default-external-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427243 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427301 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhq6h\" (UniqueName: \"kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427444 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0" 
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.427465 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530206 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530264 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhq6h\" (UniqueName: \"kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530477 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.530510 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.532166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.532587 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.536946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.537987 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.538014 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46aca6488ccc0c45ce72b5d671de4e2f1b9d18700c0b7fecdb3a92995140584f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.539018 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.564015 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.566350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhq6h\" (UniqueName: \"kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.657722 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.671481 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.673928 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.747070 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"]
Mar 19 17:06:48 crc kubenswrapper[4792]: I0319 17:06:48.859392 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kqqzt"]
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.028543 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.152706 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.153074 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.153144 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.153207 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7th9t\" (UniqueName: \"kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.153238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.153256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb\") pod \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\" (UID: \"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c\") "
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.233798 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t" (OuterVolumeSpecName: "kube-api-access-7th9t") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "kube-api-access-7th9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.260776 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7th9t\" (UniqueName: \"kubernetes.io/projected/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-kube-api-access-7th9t\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.405712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.468632 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.480529 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config" (OuterVolumeSpecName: "config") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.487798 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" event={"ID":"a519499d-858b-46d9-81d6-22b3c58eceab","Type":"ContainerStarted","Data":"18a4ae80c54b60a5e4df91ad9048a1ef4644b55a157b0924f4f824ce14ae2472"}
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.497027 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.497062 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bfbgb" event={"ID":"f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c","Type":"ContainerDied","Data":"e19004c78dd7cb382072612e90fcc67bf6c66c7ce44a605fa23e8ada7b6da547"}
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.497122 4792 scope.go:117] "RemoveContainer" containerID="dd20915955f72d7fba9e97ee56896ac695a3222d09311bf801a9f4c43b9c9f2c"
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.498344 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.517193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.524256 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" (UID: "f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.572212 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.572334 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.572360 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.572371 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.774791 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c29b201-cfdb-4e28-9752-722c118b6452" path="/var/lib/kubelet/pods/7c29b201-cfdb-4e28-9752-722c118b6452/volumes"
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.895380 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.922992 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"]
Mar 19 17:06:49 crc kubenswrapper[4792]: I0319 17:06:49.934615 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bfbgb"]
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.010766 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.230392 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.230776 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.545672 4792 generic.go:334] "Generic (PLEG): container finished" podID="a519499d-858b-46d9-81d6-22b3c58eceab" containerID="fb820acd1f991bc2cf37cf27892e4772a7a78f29e750504399be7a685b8fb10e" exitCode=0
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.545766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" event={"ID":"a519499d-858b-46d9-81d6-22b3c58eceab","Type":"ContainerDied","Data":"fb820acd1f991bc2cf37cf27892e4772a7a78f29e750504399be7a685b8fb10e"}
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.584091 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerStarted","Data":"374473baf85078e655b0e91c062816f2cb64ed793bc43d0bab786f546084566f"}
Mar 19 17:06:50 crc kubenswrapper[4792]: I0319 17:06:50.590929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerStarted","Data":"63bbcb51c836088a8869f616fa4f5546d2846ed28dc7eaa71955864b4f5e20ed"}
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.619688 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" event={"ID":"a519499d-858b-46d9-81d6-22b3c58eceab","Type":"ContainerStarted","Data":"909a160e85928f77c1041e55b6656ff30300bc14545f7cc3172eb1b63e289a98"}
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.620256 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg"
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.632458 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerStarted","Data":"c20386577796509755400ac4213ad52c6a7c543609287288985d21c03aaf24c4"}
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.643267 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerStarted","Data":"013be36e5585350f2f487280382e2d06168d672d18def84e214389c9f6fbc1b5"}
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.645479 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" podStartSLOduration=4.645460425 podStartE2EDuration="4.645460425s" podCreationTimestamp="2026-03-19 17:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:51.637765513 +0000 UTC m=+1574.783823043" watchObservedRunningTime="2026-03-19 17:06:51.645460425 +0000 UTC m=+1574.791517965"
Mar 19 17:06:51 crc kubenswrapper[4792]: I0319 17:06:51.773860 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" path="/var/lib/kubelet/pods/f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c/volumes"
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.656124 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerStarted","Data":"b08dd6de325c6e6f80317869a48d39e3cd3273dff681a2fe6a8721d3cbe35fef"}
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.661276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerStarted","Data":"3cc72b4dbf0f70cb94a1093228ce568bf5fdfde3ed33ad4e5329d14067d002b7"}
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.668445 4792 generic.go:334] "Generic (PLEG): container finished" podID="60077a8d-2acb-4c40-8052-f79382e6e373" containerID="8634d390ea5e9b4e989d3ae467efae8b818c212c7d2d75bb7fa9478bd172fcc9" exitCode=0
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.669620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pm7qd" event={"ID":"60077a8d-2acb-4c40-8052-f79382e6e373","Type":"ContainerDied","Data":"8634d390ea5e9b4e989d3ae467efae8b818c212c7d2d75bb7fa9478bd172fcc9"}
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.693720 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.69369948 podStartE2EDuration="6.69369948s" podCreationTimestamp="2026-03-19 17:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:52.683441359 +0000 UTC m=+1575.829498909" watchObservedRunningTime="2026-03-19 17:06:52.69369948 +0000 UTC m=+1575.839757020"
Mar 19 17:06:52 crc kubenswrapper[4792]: I0319 17:06:52.735571 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.735550879 podStartE2EDuration="5.735550879s" podCreationTimestamp="2026-03-19 17:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:06:52.725609326 +0000 UTC m=+1575.871666866" watchObservedRunningTime="2026-03-19 17:06:52.735550879 +0000 UTC m=+1575.881608419"
Mar 19 17:06:53 crc kubenswrapper[4792]: I0319 17:06:53.547360 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=<
Mar 19 17:06:53 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 17:06:53 crc kubenswrapper[4792]: >
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.292725 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.368439 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.694910 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-log" containerID="cri-o://013be36e5585350f2f487280382e2d06168d672d18def84e214389c9f6fbc1b5" gracePeriod=30
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.695439 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-log" containerID="cri-o://c20386577796509755400ac4213ad52c6a7c543609287288985d21c03aaf24c4" gracePeriod=30
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.695521 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-httpd" containerID="cri-o://3cc72b4dbf0f70cb94a1093228ce568bf5fdfde3ed33ad4e5329d14067d002b7" gracePeriod=30
Mar 19 17:06:54 crc kubenswrapper[4792]: I0319 17:06:54.695724 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-httpd" containerID="cri-o://b08dd6de325c6e6f80317869a48d39e3cd3273dff681a2fe6a8721d3cbe35fef" gracePeriod=30
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.711630 4792 generic.go:334] "Generic (PLEG): container finished" podID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerID="b08dd6de325c6e6f80317869a48d39e3cd3273dff681a2fe6a8721d3cbe35fef" exitCode=0
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.711868 4792 generic.go:334] "Generic (PLEG): container finished" podID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerID="c20386577796509755400ac4213ad52c6a7c543609287288985d21c03aaf24c4" exitCode=143
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.711907 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerDied","Data":"b08dd6de325c6e6f80317869a48d39e3cd3273dff681a2fe6a8721d3cbe35fef"}
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.711930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerDied","Data":"c20386577796509755400ac4213ad52c6a7c543609287288985d21c03aaf24c4"}
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.715790 4792 generic.go:334] "Generic (PLEG): container finished" podID="e260e381-c628-4cec-8558-def696f354f3" containerID="3cc72b4dbf0f70cb94a1093228ce568bf5fdfde3ed33ad4e5329d14067d002b7" exitCode=0
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.715826 4792 generic.go:334] "Generic (PLEG): container finished" podID="e260e381-c628-4cec-8558-def696f354f3" containerID="013be36e5585350f2f487280382e2d06168d672d18def84e214389c9f6fbc1b5" exitCode=143
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.715829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerDied","Data":"3cc72b4dbf0f70cb94a1093228ce568bf5fdfde3ed33ad4e5329d14067d002b7"}
Mar 19 17:06:55 crc kubenswrapper[4792]: I0319 17:06:55.715882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerDied","Data":"013be36e5585350f2f487280382e2d06168d672d18def84e214389c9f6fbc1b5"}
Mar 19 17:06:57 crc kubenswrapper[4792]: I0319 17:06:57.524592 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg"
Mar 19 17:06:57 crc kubenswrapper[4792]: I0319 17:06:57.585971 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"]
Mar 19 17:06:57 crc kubenswrapper[4792]: I0319 17:06:57.586200 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" containerID="cri-o://60ddb890d154ff48a174160f5f3b5fcf80cf9064efc8264d8e878ee3eaf94c9f" gracePeriod=10
Mar 19 17:06:58 crc kubenswrapper[4792]: I0319 17:06:58.755693 4792 generic.go:334] "Generic (PLEG): container finished" podID="76bc43a0-2615-470c-8719-f2081f6ce044" containerID="60ddb890d154ff48a174160f5f3b5fcf80cf9064efc8264d8e878ee3eaf94c9f" exitCode=0
Mar 19 17:06:58 crc kubenswrapper[4792]: I0319 17:06:58.755741 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" event={"ID":"76bc43a0-2615-470c-8719-f2081f6ce044","Type":"ContainerDied","Data":"60ddb890d154ff48a174160f5f3b5fcf80cf9064efc8264d8e878ee3eaf94c9f"}
Mar 19 17:07:00 crc kubenswrapper[4792]: I0319 17:07:00.091809 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused"
Mar 19 17:07:03 crc kubenswrapper[4792]: I0319 17:07:03.582765 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" probeResult="failure" output=<
Mar 19 17:07:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 17:07:03 crc kubenswrapper[4792]: >
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.014609 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pm7qd"
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.113886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.114050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkx78\" (UniqueName: \"kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.114078 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.114127 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.114189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.114263 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys\") pod \"60077a8d-2acb-4c40-8052-f79382e6e373\" (UID: \"60077a8d-2acb-4c40-8052-f79382e6e373\") "
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.123103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78" (OuterVolumeSpecName: "kube-api-access-tkx78") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "kube-api-access-tkx78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.126203 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.155204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.162192 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data" (OuterVolumeSpecName: "config-data") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.169602 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts" (OuterVolumeSpecName: "scripts") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.183448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60077a8d-2acb-4c40-8052-f79382e6e373" (UID: "60077a8d-2acb-4c40-8052-f79382e6e373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218428 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218464 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkx78\" (UniqueName: \"kubernetes.io/projected/60077a8d-2acb-4c40-8052-f79382e6e373-kube-api-access-tkx78\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218478 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-config-data\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218486 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218495 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.218506 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60077a8d-2acb-4c40-8052-f79382e6e373-credential-keys\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.835510 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pm7qd" event={"ID":"60077a8d-2acb-4c40-8052-f79382e6e373","Type":"ContainerDied","Data":"32ff18bd9f2eeff659c27fd6098e901e201e041adda32fcba17b3698c7042a41"}
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.836132 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pm7qd"
Mar 19 17:07:05 crc kubenswrapper[4792]: I0319 17:07:05.836179 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ff18bd9f2eeff659c27fd6098e901e201e041adda32fcba17b3698c7042a41"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.153690 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pm7qd"]
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.163030 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pm7qd"]
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.226093 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h8xdp"]
Mar 19 17:07:06 crc kubenswrapper[4792]: E0319 17:07:06.226924 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" containerName="init"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.226942 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" containerName="init"
Mar 19 17:07:06 crc kubenswrapper[4792]: E0319 17:07:06.227088 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60077a8d-2acb-4c40-8052-f79382e6e373" containerName="keystone-bootstrap"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.227102 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="60077a8d-2acb-4c40-8052-f79382e6e373" containerName="keystone-bootstrap"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.227413 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="60077a8d-2acb-4c40-8052-f79382e6e373" containerName="keystone-bootstrap"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.227510 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0300c3a-d877-4fb7-ad4b-e40ee4ed5f8c" containerName="init"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.228850 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8xdp"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.232359 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.232566 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrkzh"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.232663 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.232689 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.232690 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.235542 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8xdp"]
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.344950 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.345008 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.345036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.345088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9fx\" (UniqueName: \"kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp"
Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.345211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") "
pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.345240 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.447546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.447611 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.447676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.447714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 
17:07:06.447747 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.447810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9fx\" (UniqueName: \"kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.454956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.455534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.465415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.466217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.467659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.470942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9fx\" (UniqueName: \"kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx\") pod \"keystone-bootstrap-h8xdp\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:06 crc kubenswrapper[4792]: I0319 17:07:06.586377 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:07 crc kubenswrapper[4792]: I0319 17:07:07.765532 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60077a8d-2acb-4c40-8052-f79382e6e373" path="/var/lib/kubelet/pods/60077a8d-2acb-4c40-8052-f79382e6e373/volumes" Mar 19 17:07:10 crc kubenswrapper[4792]: I0319 17:07:10.091926 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Mar 19 17:07:12 crc kubenswrapper[4792]: I0319 17:07:12.534440 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:07:12 crc kubenswrapper[4792]: I0319 17:07:12.584809 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:07:12 crc kubenswrapper[4792]: I0319 17:07:12.773502 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:07:13 crc kubenswrapper[4792]: I0319 17:07:13.929605 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jg2vl" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" containerID="cri-o://f6689d53d79b58bd5d06a5116d45e9cb0e04b0b879c5ba5af5408de1802ca920" gracePeriod=2 Mar 19 17:07:14 crc kubenswrapper[4792]: I0319 17:07:14.943478 4792 generic.go:334] "Generic (PLEG): container finished" podID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerID="f6689d53d79b58bd5d06a5116d45e9cb0e04b0b879c5ba5af5408de1802ca920" exitCode=0 Mar 19 17:07:14 crc kubenswrapper[4792]: I0319 17:07:14.943578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg2vl" 
event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerDied","Data":"f6689d53d79b58bd5d06a5116d45e9cb0e04b0b879c5ba5af5408de1802ca920"} Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.093031 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.093294 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.254964 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.255159 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59hbbh5b6h685h8h54dh64bh79h9h54h8h675hd7h67bhd9h5fbh686h658h6bh677h55fhbch5fbh55ch575h5bfh576h65dh564h656h5f8hdcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt2ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.488818 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.489020 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9gsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-r6f9z_openstack(cdaaa799-71ff-429b-86fe-bbe4e903984f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.490394 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-r6f9z" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.652022 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.658973 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.706717 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.779936 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780067 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhq6h\" (UniqueName: \"kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780139 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7vz\" (UniqueName: \"kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 
17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780160 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780250 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780274 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780316 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run\") pod \"e260e381-c628-4cec-8558-def696f354f3\" 
(UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780357 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780400 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780545 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780615 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780655 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780704 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0\") pod \"76bc43a0-2615-470c-8719-f2081f6ce044\" (UID: \"76bc43a0-2615-470c-8719-f2081f6ce044\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle\") pod \"e260e381-c628-4cec-8558-def696f354f3\" (UID: \"e260e381-c628-4cec-8558-def696f354f3\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.780801 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4mb4\" (UniqueName: \"kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4\") pod \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\" (UID: \"b48702a8-0e3e-4776-9ee8-ef674e38fe1a\") " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 
17:07:15.782149 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.786305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts" (OuterVolumeSpecName: "scripts") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.789855 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.790547 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs" (OuterVolumeSpecName: "logs") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.790785 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4" (OuterVolumeSpecName: "kube-api-access-m4mb4") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). 
InnerVolumeSpecName "kube-api-access-m4mb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.791061 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs" (OuterVolumeSpecName: "logs") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.805002 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts" (OuterVolumeSpecName: "scripts") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.805214 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz" (OuterVolumeSpecName: "kube-api-access-9t7vz") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: "76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "kube-api-access-9t7vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.837591 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h" (OuterVolumeSpecName: "kube-api-access-fhq6h") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "kube-api-access-fhq6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.837625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808" (OuterVolumeSpecName: "glance") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "pvc-f406fea9-b42c-4c85-920d-4d104deeb808". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.842239 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: "76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.851190 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82" (OuterVolumeSpecName: "glance") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.869687 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data" (OuterVolumeSpecName: "config-data") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.880260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: "76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.883680 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.883948 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884026 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884575 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e260e381-c628-4cec-8558-def696f354f3-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884664 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884758 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884898 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") on node \"crc\" " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.884992 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") on node \"crc\" " Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885049 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: "76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885071 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885144 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885165 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4mb4\" (UniqueName: \"kubernetes.io/projected/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-kube-api-access-m4mb4\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885178 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885193 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhq6h\" (UniqueName: \"kubernetes.io/projected/e260e381-c628-4cec-8558-def696f354f3-kube-api-access-fhq6h\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.885204 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7vz\" (UniqueName: \"kubernetes.io/projected/76bc43a0-2615-470c-8719-f2081f6ce044-kube-api-access-9t7vz\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.895686 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config" (OuterVolumeSpecName: "config") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: 
"76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.897796 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data" (OuterVolumeSpecName: "config-data") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.908879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e260e381-c628-4cec-8558-def696f354f3" (UID: "e260e381-c628-4cec-8558-def696f354f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.911978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b48702a8-0e3e-4776-9ee8-ef674e38fe1a" (UID: "b48702a8-0e3e-4776-9ee8-ef674e38fe1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.921398 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.921756 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f406fea9-b42c-4c85-920d-4d104deeb808" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808") on node "crc" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.925113 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.925529 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82") on node "crc" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.930825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76bc43a0-2615-470c-8719-f2081f6ce044" (UID: "76bc43a0-2615-470c-8719-f2081f6ce044"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.965308 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.965302 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e260e381-c628-4cec-8558-def696f354f3","Type":"ContainerDied","Data":"63bbcb51c836088a8869f616fa4f5546d2846ed28dc7eaa71955864b4f5e20ed"} Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.965454 4792 scope.go:117] "RemoveContainer" containerID="3cc72b4dbf0f70cb94a1093228ce568bf5fdfde3ed33ad4e5329d14067d002b7" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.969817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" event={"ID":"76bc43a0-2615-470c-8719-f2081f6ce044","Type":"ContainerDied","Data":"c8ec5954dcc116d4f81c11fb3540632a7ab04e22f7bfd22ac7931a71ae422212"} Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.970533 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.974020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b48702a8-0e3e-4776-9ee8-ef674e38fe1a","Type":"ContainerDied","Data":"374473baf85078e655b0e91c062816f2cb64ed793bc43d0bab786f546084566f"} Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.974178 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:07:15 crc kubenswrapper[4792]: E0319 17:07:15.975737 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-r6f9z" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986775 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986804 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986815 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986826 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e260e381-c628-4cec-8558-def696f354f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986852 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48702a8-0e3e-4776-9ee8-ef674e38fe1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986861 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986869 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76bc43a0-2615-470c-8719-f2081f6ce044-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:15 crc kubenswrapper[4792]: I0319 17:07:15.986878 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.023372 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.032397 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.057529 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.069984 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-9hdvh"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.078585 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079038 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="init" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079056 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="init" Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079076 4792 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079083 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079108 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079114 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079128 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079134 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079149 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079156 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: E0319 17:07:16.079167 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079173 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079354 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e260e381-c628-4cec-8558-def696f354f3" 
containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079367 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079383 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e260e381-c628-4cec-8558-def696f354f3" containerName="glance-httpd" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079395 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.079408 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" containerName="glance-log" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.080547 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.084821 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.085019 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.085179 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bcbrs" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.085304 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.100488 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.118989 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.132894 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.141997 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.144496 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.149548 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.151075 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.152039 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.190932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191355 
4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191435 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mr9\" (UniqueName: \"kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191468 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 
17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191588 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191613 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191672 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " 
pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjbtt\" (UniqueName: \"kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.191782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293594 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " 
pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293613 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293640 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjbtt\" (UniqueName: \"kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: 
\"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293819 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293855 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.293875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mr9\" (UniqueName: \"kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9\") pod \"glance-default-external-api-0\" (UID: 
\"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.294620 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.294921 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.296525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.296942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.300058 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.300623 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.301328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.302135 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.302175 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46aca6488ccc0c45ce72b5d671de4e2f1b9d18700c0b7fecdb3a92995140584f/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.302323 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.302393 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4b43246ff1db308c2bef8dd59bebb849755d71eee7e8415d63550c78edf7118/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.303396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.304171 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.305460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.307448 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.307768 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.317251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mr9\" (UniqueName: \"kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.317251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjbtt\" (UniqueName: \"kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.353068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.354148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.405377 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.461420 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.985767 4792 generic.go:334] "Generic (PLEG): container finished" podID="13b09649-8b3c-4328-97b8-5c5c8d3e198b" containerID="ef5ed9526e3a9f2edb00d59d16e401ab9d770cdc8d505864ad15ad8f04b617a8" exitCode=0 Mar 19 17:07:16 crc kubenswrapper[4792]: I0319 17:07:16.985817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2chz4" event={"ID":"13b09649-8b3c-4328-97b8-5c5c8d3e198b","Type":"ContainerDied","Data":"ef5ed9526e3a9f2edb00d59d16e401ab9d770cdc8d505864ad15ad8f04b617a8"} Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.275508 4792 scope.go:117] "RemoveContainer" containerID="013be36e5585350f2f487280382e2d06168d672d18def84e214389c9f6fbc1b5" Mar 19 17:07:17 crc kubenswrapper[4792]: E0319 17:07:17.275541 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 19 17:07:17 crc kubenswrapper[4792]: E0319 17:07:17.276205 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjwlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6ftwc_openstack(ef634102-a683-498b-ad98-61d470f7fefa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:07:17 crc kubenswrapper[4792]: E0319 17:07:17.277295 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6ftwc" podUID="ef634102-a683-498b-ad98-61d470f7fefa" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.532826 4792 scope.go:117] "RemoveContainer" containerID="60ddb890d154ff48a174160f5f3b5fcf80cf9064efc8264d8e878ee3eaf94c9f" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.562781 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.576527 4792 scope.go:117] "RemoveContainer" containerID="9c27ac9f96f277b848d308480571893f717237453ab72aeaf62e31b0f6783d66" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.617900 4792 scope.go:117] "RemoveContainer" containerID="b08dd6de325c6e6f80317869a48d39e3cd3273dff681a2fe6a8721d3cbe35fef" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.620505 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities\") pod \"5d5868a6-fe98-44f9-908f-a5c9335098b1\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.620571 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vw6s\" (UniqueName: \"kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s\") pod \"5d5868a6-fe98-44f9-908f-a5c9335098b1\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.620641 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content\") pod \"5d5868a6-fe98-44f9-908f-a5c9335098b1\" (UID: \"5d5868a6-fe98-44f9-908f-a5c9335098b1\") " Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.621458 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities" (OuterVolumeSpecName: "utilities") pod "5d5868a6-fe98-44f9-908f-a5c9335098b1" (UID: "5d5868a6-fe98-44f9-908f-a5c9335098b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.629975 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s" (OuterVolumeSpecName: "kube-api-access-9vw6s") pod "5d5868a6-fe98-44f9-908f-a5c9335098b1" (UID: "5d5868a6-fe98-44f9-908f-a5c9335098b1"). InnerVolumeSpecName "kube-api-access-9vw6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.723699 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.723737 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vw6s\" (UniqueName: \"kubernetes.io/projected/5d5868a6-fe98-44f9-908f-a5c9335098b1-kube-api-access-9vw6s\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.746655 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d5868a6-fe98-44f9-908f-a5c9335098b1" (UID: "5d5868a6-fe98-44f9-908f-a5c9335098b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.755225 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" path="/var/lib/kubelet/pods/76bc43a0-2615-470c-8719-f2081f6ce044/volumes" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.756099 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48702a8-0e3e-4776-9ee8-ef674e38fe1a" path="/var/lib/kubelet/pods/b48702a8-0e3e-4776-9ee8-ef674e38fe1a/volumes" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.757151 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e260e381-c628-4cec-8558-def696f354f3" path="/var/lib/kubelet/pods/e260e381-c628-4cec-8558-def696f354f3/volumes" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.779430 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8xdp"] Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.824687 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5868a6-fe98-44f9-908f-a5c9335098b1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:17 crc kubenswrapper[4792]: I0319 17:07:17.955578 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.002137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm2lm" event={"ID":"03107c0e-b888-4df4-892a-daebb217a18e","Type":"ContainerStarted","Data":"6ce1093c98733ad0c9ca979f6fb001e0e8eb02348927ecb86be0c76bf0dce482"} Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.004464 4792 scope.go:117] "RemoveContainer" containerID="c20386577796509755400ac4213ad52c6a7c543609287288985d21c03aaf24c4" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.007739 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-jg2vl" event={"ID":"5d5868a6-fe98-44f9-908f-a5c9335098b1","Type":"ContainerDied","Data":"a2d176b24544bcddeb112ad0d855aa19ebbe662b5e453f4f3e0c8a95d464e784"} Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.007853 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg2vl" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.011494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86jjn" event={"ID":"567d324f-126d-4f06-91df-d2d84fd836f3","Type":"ContainerStarted","Data":"cfe5f948852c69429a2761f42618a607b6376d98d36510e06fcb0c187d584dd9"} Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.029591 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wm2lm" podStartSLOduration=5.622291345 podStartE2EDuration="35.02955213s" podCreationTimestamp="2026-03-19 17:06:43 +0000 UTC" firstStartedPulling="2026-03-19 17:06:46.077926587 +0000 UTC m=+1569.223984127" lastFinishedPulling="2026-03-19 17:07:15.485187372 +0000 UTC m=+1598.631244912" observedRunningTime="2026-03-19 17:07:18.026074424 +0000 UTC m=+1601.172131974" watchObservedRunningTime="2026-03-19 17:07:18.02955213 +0000 UTC m=+1601.175609670" Mar 19 17:07:18 crc kubenswrapper[4792]: E0319 17:07:18.064180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6ftwc" podUID="ef634102-a683-498b-ad98-61d470f7fefa" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.071875 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-86jjn" podStartSLOduration=5.419269121 podStartE2EDuration="35.07182786s" podCreationTimestamp="2026-03-19 
17:06:43 +0000 UTC" firstStartedPulling="2026-03-19 17:06:45.828381867 +0000 UTC m=+1568.974439407" lastFinishedPulling="2026-03-19 17:07:15.480940616 +0000 UTC m=+1598.626998146" observedRunningTime="2026-03-19 17:07:18.050804833 +0000 UTC m=+1601.196862373" watchObservedRunningTime="2026-03-19 17:07:18.07182786 +0000 UTC m=+1601.217885410" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.150754 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.179920 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.182222 4792 scope.go:117] "RemoveContainer" containerID="f6689d53d79b58bd5d06a5116d45e9cb0e04b0b879c5ba5af5408de1802ca920" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.206241 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jg2vl"] Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.276313 4792 scope.go:117] "RemoveContainer" containerID="33bcfe0dab5aa8f138f612eec29c8813dc0213fa5daf141b367f29abd2af524c" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.318522 4792 scope.go:117] "RemoveContainer" containerID="20d17599e7baf08aa80dd68cd5f4e581b1c695b476e66383f43e5b9e2e64547b" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.423611 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2chz4" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.548467 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle\") pod \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.548636 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bps7q\" (UniqueName: \"kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q\") pod \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.548706 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config\") pod \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\" (UID: \"13b09649-8b3c-4328-97b8-5c5c8d3e198b\") " Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.554151 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q" (OuterVolumeSpecName: "kube-api-access-bps7q") pod "13b09649-8b3c-4328-97b8-5c5c8d3e198b" (UID: "13b09649-8b3c-4328-97b8-5c5c8d3e198b"). InnerVolumeSpecName "kube-api-access-bps7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.584911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13b09649-8b3c-4328-97b8-5c5c8d3e198b" (UID: "13b09649-8b3c-4328-97b8-5c5c8d3e198b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.585039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config" (OuterVolumeSpecName: "config") pod "13b09649-8b3c-4328-97b8-5c5c8d3e198b" (UID: "13b09649-8b3c-4328-97b8-5c5c8d3e198b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.653187 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.653239 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b09649-8b3c-4328-97b8-5c5c8d3e198b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:18 crc kubenswrapper[4792]: I0319 17:07:18.653255 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bps7q\" (UniqueName: \"kubernetes.io/projected/13b09649-8b3c-4328-97b8-5c5c8d3e198b-kube-api-access-bps7q\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.044214 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2chz4" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.045497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2chz4" event={"ID":"13b09649-8b3c-4328-97b8-5c5c8d3e198b","Type":"ContainerDied","Data":"39be2fbe6fcb21953ee9b46da8609c217c6a94a5aa0c1c820ccc377eca37713a"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.045544 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39be2fbe6fcb21953ee9b46da8609c217c6a94a5aa0c1c820ccc377eca37713a" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.090344 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8xdp" event={"ID":"398bc201-2c6c-4434-ad7a-208f048b9f5c","Type":"ContainerStarted","Data":"919d8ed8b8f0c3e2484ed415e3b412db4ed9c307d4cdf717f0c84cf8e2050417"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.090420 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8xdp" event={"ID":"398bc201-2c6c-4434-ad7a-208f048b9f5c","Type":"ContainerStarted","Data":"3dba0dbe09c24317dcafb172936ba3dc21cb9613188575bcc3c8d5a47c855f45"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.116241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerStarted","Data":"482f902e126b3cd1ae01a7f21ff6a7aacd4f8c12e2b1de995d82188875ae87cb"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.116291 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerStarted","Data":"a99713fdb1f2cf54753375ca4fb6fc97dc9958f68d984b442bf62ec0f589f138"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.156980 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerStarted","Data":"3c523ba07fe1239f7ecc3166c35834f363351179822199a3716cb6e18c0d4bd9"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.159133 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h8xdp" podStartSLOduration=13.159107137 podStartE2EDuration="13.159107137s" podCreationTimestamp="2026-03-19 17:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:19.132302752 +0000 UTC m=+1602.278360292" watchObservedRunningTime="2026-03-19 17:07:19.159107137 +0000 UTC m=+1602.305164677" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.181488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerStarted","Data":"0202957be291e371b746b6e8b63164def65e04a55e4792e0048049b0a584e935"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.181605 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerStarted","Data":"791a84281596446464f2eaab952f2e4b9df221b5cecae9a536aff627e5b2e9a6"} Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.382334 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:07:19 crc kubenswrapper[4792]: E0319 17:07:19.382885 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.382902 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" Mar 19 17:07:19 crc kubenswrapper[4792]: E0319 17:07:19.382919 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="13b09649-8b3c-4328-97b8-5c5c8d3e198b" containerName="neutron-db-sync" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.382927 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b09649-8b3c-4328-97b8-5c5c8d3e198b" containerName="neutron-db-sync" Mar 19 17:07:19 crc kubenswrapper[4792]: E0319 17:07:19.382945 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="extract-utilities" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.382952 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="extract-utilities" Mar 19 17:07:19 crc kubenswrapper[4792]: E0319 17:07:19.382972 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="extract-content" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.382978 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="extract-content" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.383149 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" containerName="registry-server" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.383174 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b09649-8b3c-4328-97b8-5c5c8d3e198b" containerName="neutron-db-sync" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.384298 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.398450 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.398951 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.399159 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.399260 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9ghfj" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.421664 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.439918 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"] Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.442016 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.500657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.521206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.521549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.521797 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.522016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" 
(UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.538681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.539103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.539513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btql7\" (UniqueName: \"kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.539799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.540088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle\") pod 
\"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.540384 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgh2\" (UniqueName: \"kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.541400 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"] Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642084 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642195 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btql7\" (UniqueName: \"kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642389 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgh2\" (UniqueName: \"kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.642468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.643417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.643940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.644421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc\") pod 
\"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.660093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.661132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.671822 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.677708 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.686754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" 
Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.689532 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btql7\" (UniqueName: \"kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7\") pod \"dnsmasq-dns-5ccc5c4795-v76p9\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.690440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.711884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgh2\" (UniqueName: \"kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2\") pod \"neutron-658464b84d-mwf85\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.758970 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5868a6-fe98-44f9-908f-a5c9335098b1" path="/var/lib/kubelet/pods/5d5868a6-fe98-44f9-908f-a5c9335098b1/volumes" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.858250 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:19 crc kubenswrapper[4792]: I0319 17:07:19.888183 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.093959 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-9hdvh" podUID="76bc43a0-2615-470c-8719-f2081f6ce044" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.231207 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.231272 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.231316 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.232080 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.232132 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" gracePeriod=600 Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.260410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerStarted","Data":"d7642887d1fb972ae5659d67a11e7271a301b5230392701b116dad0b2817e27a"} Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.275508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerStarted","Data":"d0c26199bfce4450860cca013e45a244323038f5192444cb572f50e0cde2e9b8"} Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.305169 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.305147068 podStartE2EDuration="4.305147068s" podCreationTimestamp="2026-03-19 17:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:20.290121366 +0000 UTC m=+1603.436178906" watchObservedRunningTime="2026-03-19 17:07:20.305147068 +0000 UTC m=+1603.451204608" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.350407 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.35038455 podStartE2EDuration="4.35038455s" podCreationTimestamp="2026-03-19 17:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:20.317379235 +0000 UTC m=+1603.463436775" watchObservedRunningTime="2026-03-19 17:07:20.35038455 +0000 UTC m=+1603.496442090" Mar 19 17:07:20 crc kubenswrapper[4792]: 
E0319 17:07:20.424610 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.564284 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"] Mar 19 17:07:20 crc kubenswrapper[4792]: I0319 17:07:20.649169 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.294855 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" exitCode=0 Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.295219 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.295277 4792 scope.go:117] "RemoveContainer" containerID="068a73beae621ae4f956b367fc3282b83e72642257a902caff5addac077ed9f3" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.299011 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:07:21 crc kubenswrapper[4792]: E0319 17:07:21.299592 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.310696 4792 generic.go:334] "Generic (PLEG): container finished" podID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerID="c458994a2fca97572611bbe753b0a1c2a6afb819bd004be359aa262ae9de8118" exitCode=0 Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.311359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" event={"ID":"3442fc07-ecbb-4602-9d28-fcbcff219873","Type":"ContainerDied","Data":"c458994a2fca97572611bbe753b0a1c2a6afb819bd004be359aa262ae9de8118"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.311430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" event={"ID":"3442fc07-ecbb-4602-9d28-fcbcff219873","Type":"ContainerStarted","Data":"56340c750d1aac1e27ce2635d66a56265776d58fdbd35451d61c3967e338d5ce"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.321015 4792 generic.go:334] "Generic (PLEG): container finished" podID="567d324f-126d-4f06-91df-d2d84fd836f3" containerID="cfe5f948852c69429a2761f42618a607b6376d98d36510e06fcb0c187d584dd9" exitCode=0 Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.321110 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86jjn" event={"ID":"567d324f-126d-4f06-91df-d2d84fd836f3","Type":"ContainerDied","Data":"cfe5f948852c69429a2761f42618a607b6376d98d36510e06fcb0c187d584dd9"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.379081 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerStarted","Data":"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851"} Mar 19 17:07:21 crc 
kubenswrapper[4792]: I0319 17:07:21.379124 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerStarted","Data":"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.379136 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerStarted","Data":"160d242e941f79761ccbdccca8c891f06fe7f4458805f8dfca8317fed32a4a3f"} Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.380565 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.460550 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-658464b84d-mwf85" podStartSLOduration=2.460535185 podStartE2EDuration="2.460535185s" podCreationTimestamp="2026-03-19 17:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:21.427136468 +0000 UTC m=+1604.573193998" watchObservedRunningTime="2026-03-19 17:07:21.460535185 +0000 UTC m=+1604.606592725" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.670076 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.674043 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.676732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.676930 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.683443 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823099 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823389 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6cm\" (UniqueName: \"kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.823413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv6cm\" (UniqueName: \"kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.925582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle\") pod 
\"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.931642 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.932282 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.933885 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.934511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.941173 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc 
kubenswrapper[4792]: I0319 17:07:21.941563 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:21 crc kubenswrapper[4792]: I0319 17:07:21.961128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv6cm\" (UniqueName: \"kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm\") pod \"neutron-65c9569ddf-24zz6\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.001153 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.409065 4792 generic.go:334] "Generic (PLEG): container finished" podID="03107c0e-b888-4df4-892a-daebb217a18e" containerID="6ce1093c98733ad0c9ca979f6fb001e0e8eb02348927ecb86be0c76bf0dce482" exitCode=0 Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.409276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm2lm" event={"ID":"03107c0e-b888-4df4-892a-daebb217a18e","Type":"ContainerDied","Data":"6ce1093c98733ad0c9ca979f6fb001e0e8eb02348927ecb86be0c76bf0dce482"} Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.413988 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" event={"ID":"3442fc07-ecbb-4602-9d28-fcbcff219873","Type":"ContainerStarted","Data":"b7dc050f256ec5d7c79bc1c2879f36b4843100f3a09d927c650dddcfe3d1c4e0"} Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.414964 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:22 crc 
kubenswrapper[4792]: I0319 17:07:22.448979 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" podStartSLOduration=3.4489564489999998 podStartE2EDuration="3.448956449s" podCreationTimestamp="2026-03-19 17:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:22.444072975 +0000 UTC m=+1605.590130505" watchObservedRunningTime="2026-03-19 17:07:22.448956449 +0000 UTC m=+1605.595013989" Mar 19 17:07:22 crc kubenswrapper[4792]: I0319 17:07:22.738323 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:23 crc kubenswrapper[4792]: I0319 17:07:23.434732 4792 generic.go:334] "Generic (PLEG): container finished" podID="398bc201-2c6c-4434-ad7a-208f048b9f5c" containerID="919d8ed8b8f0c3e2484ed415e3b412db4ed9c307d4cdf717f0c84cf8e2050417" exitCode=0 Mar 19 17:07:23 crc kubenswrapper[4792]: I0319 17:07:23.434772 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8xdp" event={"ID":"398bc201-2c6c-4434-ad7a-208f048b9f5c","Type":"ContainerDied","Data":"919d8ed8b8f0c3e2484ed415e3b412db4ed9c307d4cdf717f0c84cf8e2050417"} Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.823925 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86jjn" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.880787 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.894385 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.955956 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9l5\" (UniqueName: \"kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5\") pod \"567d324f-126d-4f06-91df-d2d84fd836f3\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.956018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs\") pod \"567d324f-126d-4f06-91df-d2d84fd836f3\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.956066 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle\") pod \"567d324f-126d-4f06-91df-d2d84fd836f3\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.956136 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data\") pod \"567d324f-126d-4f06-91df-d2d84fd836f3\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.956363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts\") pod \"567d324f-126d-4f06-91df-d2d84fd836f3\" (UID: \"567d324f-126d-4f06-91df-d2d84fd836f3\") " Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.958948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs" (OuterVolumeSpecName: "logs") pod "567d324f-126d-4f06-91df-d2d84fd836f3" (UID: "567d324f-126d-4f06-91df-d2d84fd836f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.971644 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5" (OuterVolumeSpecName: "kube-api-access-zm9l5") pod "567d324f-126d-4f06-91df-d2d84fd836f3" (UID: "567d324f-126d-4f06-91df-d2d84fd836f3"). InnerVolumeSpecName "kube-api-access-zm9l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.984233 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts" (OuterVolumeSpecName: "scripts") pod "567d324f-126d-4f06-91df-d2d84fd836f3" (UID: "567d324f-126d-4f06-91df-d2d84fd836f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.997390 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data" (OuterVolumeSpecName: "config-data") pod "567d324f-126d-4f06-91df-d2d84fd836f3" (UID: "567d324f-126d-4f06-91df-d2d84fd836f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:25 crc kubenswrapper[4792]: I0319 17:07:25.999483 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "567d324f-126d-4f06-91df-d2d84fd836f3" (UID: "567d324f-126d-4f06-91df-d2d84fd836f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.058621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059082 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059165 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059231 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle\") pod \"03107c0e-b888-4df4-892a-daebb217a18e\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059395 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059459 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059511 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data\") pod \"03107c0e-b888-4df4-892a-daebb217a18e\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059631 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvlvc\" (UniqueName: \"kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc\") pod \"03107c0e-b888-4df4-892a-daebb217a18e\" (UID: \"03107c0e-b888-4df4-892a-daebb217a18e\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.059710 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l9fx\" (UniqueName: \"kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx\") pod \"398bc201-2c6c-4434-ad7a-208f048b9f5c\" (UID: \"398bc201-2c6c-4434-ad7a-208f048b9f5c\") " Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.060376 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.060422 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.060435 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/567d324f-126d-4f06-91df-d2d84fd836f3-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.060447 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9l5\" (UniqueName: \"kubernetes.io/projected/567d324f-126d-4f06-91df-d2d84fd836f3-kube-api-access-zm9l5\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.060460 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567d324f-126d-4f06-91df-d2d84fd836f3-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.064080 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.064996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts" (OuterVolumeSpecName: "scripts") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.065176 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx" (OuterVolumeSpecName: "kube-api-access-7l9fx") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "kube-api-access-7l9fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.066213 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03107c0e-b888-4df4-892a-daebb217a18e" (UID: "03107c0e-b888-4df4-892a-daebb217a18e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.068470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc" (OuterVolumeSpecName: "kube-api-access-zvlvc") pod "03107c0e-b888-4df4-892a-daebb217a18e" (UID: "03107c0e-b888-4df4-892a-daebb217a18e"). InnerVolumeSpecName "kube-api-access-zvlvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.072892 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.103801 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data" (OuterVolumeSpecName: "config-data") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.104114 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "398bc201-2c6c-4434-ad7a-208f048b9f5c" (UID: "398bc201-2c6c-4434-ad7a-208f048b9f5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.104355 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03107c0e-b888-4df4-892a-daebb217a18e" (UID: "03107c0e-b888-4df4-892a-daebb217a18e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.162911 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvlvc\" (UniqueName: \"kubernetes.io/projected/03107c0e-b888-4df4-892a-daebb217a18e-kube-api-access-zvlvc\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.162946 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l9fx\" (UniqueName: \"kubernetes.io/projected/398bc201-2c6c-4434-ad7a-208f048b9f5c-kube-api-access-7l9fx\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.162957 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.162968 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.163062 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.163072 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.163080 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.163087 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398bc201-2c6c-4434-ad7a-208f048b9f5c-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.163097 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03107c0e-b888-4df4-892a-daebb217a18e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.407018 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.407101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.448544 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 19 
17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.461828 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.461949 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.463406 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.499491 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.510461 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8xdp" event={"ID":"398bc201-2c6c-4434-ad7a-208f048b9f5c","Type":"ContainerDied","Data":"3dba0dbe09c24317dcafb172936ba3dc21cb9613188575bcc3c8d5a47c855f45"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.510502 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dba0dbe09c24317dcafb172936ba3dc21cb9613188575bcc3c8d5a47c855f45" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.510581 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8xdp" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.515370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wm2lm" event={"ID":"03107c0e-b888-4df4-892a-daebb217a18e","Type":"ContainerDied","Data":"c85e79a1869cd55ce77831fcd056b3003a015143da3f12548dc0d635197f00e1"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.515436 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85e79a1869cd55ce77831fcd056b3003a015143da3f12548dc0d635197f00e1" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.515396 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wm2lm" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.519817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerStarted","Data":"8836aac468233d2dab1a73339d5a3de6126fc6cbe62b9330d484401e592d78f3"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.529847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86jjn" event={"ID":"567d324f-126d-4f06-91df-d2d84fd836f3","Type":"ContainerDied","Data":"b1da40b21b70316e7af01c523d50d97e4b6417fdbaca03bfba9123bc0fbabc3b"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.529893 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1da40b21b70316e7af01c523d50d97e4b6417fdbaca03bfba9123bc0fbabc3b" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.529968 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-86jjn" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.533157 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.538002 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerStarted","Data":"531d789062b377ce6f3ad30c627bcb3aee6ab58f003ac30243da3993acc8b218"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.538055 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerStarted","Data":"e186b30476926b21afbd7781efe2b3fce255f4665739e11011daa22ce0f54925"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.538071 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerStarted","Data":"e679f05810fb1fcadfd37900793b1d4355ec0e9c0fc088f12a2c45110463f2bd"} Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.539324 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.539348 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.539358 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.539369 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.539379 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.577699 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65c9569ddf-24zz6" podStartSLOduration=5.577680279 podStartE2EDuration="5.577680279s" podCreationTimestamp="2026-03-19 17:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:26.575410587 +0000 UTC m=+1609.721468127" watchObservedRunningTime="2026-03-19 17:07:26.577680279 +0000 UTC m=+1609.723737829" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.970751 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:07:26 crc kubenswrapper[4792]: E0319 17:07:26.971387 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398bc201-2c6c-4434-ad7a-208f048b9f5c" containerName="keystone-bootstrap" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971401 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="398bc201-2c6c-4434-ad7a-208f048b9f5c" containerName="keystone-bootstrap" Mar 19 17:07:26 crc kubenswrapper[4792]: E0319 17:07:26.971428 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567d324f-126d-4f06-91df-d2d84fd836f3" containerName="placement-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971436 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="567d324f-126d-4f06-91df-d2d84fd836f3" containerName="placement-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: E0319 17:07:26.971455 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03107c0e-b888-4df4-892a-daebb217a18e" containerName="barbican-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971460 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="03107c0e-b888-4df4-892a-daebb217a18e" 
containerName="barbican-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971656 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="03107c0e-b888-4df4-892a-daebb217a18e" containerName="barbican-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971677 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="567d324f-126d-4f06-91df-d2d84fd836f3" containerName="placement-db-sync" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.971692 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="398bc201-2c6c-4434-ad7a-208f048b9f5c" containerName="keystone-bootstrap" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.979377 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.982588 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.983106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.983321 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kztlw" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.984116 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.990881 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:07:26 crc kubenswrapper[4792]: I0319 17:07:26.996229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.081608 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.081708 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.081756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.081802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.081892 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrlw\" (UniqueName: \"kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.082045 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.082080 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.097937 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-68456dfd85-xsh6s"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.099524 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.105647 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.105885 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.112328 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.113022 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrkzh" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.113309 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.115876 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 17:07:27 crc 
kubenswrapper[4792]: I0319 17:07:27.128043 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68456dfd85-xsh6s"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-fernet-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-config-data\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data\") pod \"placement-7c679c588-pcfbf\" (UID: 
\"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.184994 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7sh4\" (UniqueName: \"kubernetes.io/projected/a782fd7c-52d9-472c-98f4-a390ca0d94b6-kube-api-access-g7sh4\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185087 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185129 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-public-tls-certs\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") 
" pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185186 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-credential-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-scripts\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrlw\" (UniqueName: \"kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-internal-tls-certs\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.185455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-combined-ca-bundle\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " 
pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.186292 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.213745 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.215673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.216438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.228755 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.228833 4792 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.231168 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.235272 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.235491 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7t5sq" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.246398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrlw\" (UniqueName: \"kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.246675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data\") pod \"placement-7c679c588-pcfbf\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.252359 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.253990 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.288264 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-internal-tls-certs\") 
pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.288330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.288407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-combined-ca-bundle\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-fernet-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289787 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-config-data\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.289930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gds\" (UniqueName: \"kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.290000 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7sh4\" (UniqueName: \"kubernetes.io/projected/a782fd7c-52d9-472c-98f4-a390ca0d94b6-kube-api-access-g7sh4\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.290107 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-public-tls-certs\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.290138 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-credential-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.290185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-scripts\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.302858 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-config-data\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.307017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-public-tls-certs\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.315275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-scripts\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.317569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-credential-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.318496 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-internal-tls-certs\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.319284 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-combined-ca-bundle\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.326587 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a782fd7c-52d9-472c-98f4-a390ca0d94b6-fernet-keys\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.327276 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.335670 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.350586 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.367802 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.367893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7sh4\" (UniqueName: \"kubernetes.io/projected/a782fd7c-52d9-472c-98f4-a390ca0d94b6-kube-api-access-g7sh4\") pod \"keystone-68456dfd85-xsh6s\" (UID: \"a782fd7c-52d9-472c-98f4-a390ca0d94b6\") " pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392575 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392642 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5gds\" (UniqueName: \"kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392739 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: 
\"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392831 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392880 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9s2\" (UniqueName: \"kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392919 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs\") pod 
\"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.392987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.398169 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.410061 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.422573 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"] Mar 19 17:07:27 crc 
kubenswrapper[4792]: I0319 17:07:27.423143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.424413 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.428446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.473855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5gds\" (UniqueName: \"kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds\") pod \"barbican-keystone-listener-6f947cc86b-s9rw6\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.495177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.495257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9s2\" (UniqueName: 
\"kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.495313 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.495365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.495441 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.496542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.499454 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.509040 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.516523 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.548421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9s2\" (UniqueName: \"kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2\") pod \"barbican-worker-5c8bf6b7df-4wwcd\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.575374 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.575915 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="dnsmasq-dns" containerID="cri-o://b7dc050f256ec5d7c79bc1c2879f36b4843100f3a09d927c650dddcfe3d1c4e0" gracePeriod=10 Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.591112 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.638037 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.640597 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.650677 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.673193 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"] Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.674772 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.682966 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.683908 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:27 crc kubenswrapper[4792]: I0319 17:07:27.710919 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.726919 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.749103 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6bf88c4df4-qpgln"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.775017 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6766d67c9f-bwz6s"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.781423 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.803987 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819144 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819279 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzhj\" (UniqueName: \"kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km498\" (UniqueName: 
\"kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819537 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819597 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819632 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819695 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819735 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819814 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.819858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data-custom\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921416 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data-custom\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921457 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-logs\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc 
kubenswrapper[4792]: I0319 17:07:27.921531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/053f5f35-2164-43eb-9223-f36a5de46700-logs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921551 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzhj\" (UniqueName: \"kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.921580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2l7\" (UniqueName: \"kubernetes.io/projected/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-kube-api-access-7l2l7\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.928787 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km498\" (UniqueName: \"kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " 
pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940862 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 
crc kubenswrapper[4792]: I0319 17:07:27.940910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940968 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-combined-ca-bundle\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.940996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.941018 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.941065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttxs\" (UniqueName: \"kubernetes.io/projected/053f5f35-2164-43eb-9223-f36a5de46700-kube-api-access-4ttxs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " 
pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.941157 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-combined-ca-bundle\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.942527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.943071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.943182 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.943709 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc 
kubenswrapper[4792]: I0319 17:07:27.944343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:27.986320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.002012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.034890 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km498\" (UniqueName: \"kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498\") pod \"dnsmasq-dns-688c87cc99-vz7kl\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") " pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.035433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.046955 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-combined-ca-bundle\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047031 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttxs\" (UniqueName: \"kubernetes.io/projected/053f5f35-2164-43eb-9223-f36a5de46700-kube-api-access-4ttxs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047126 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-combined-ca-bundle\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data-custom\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data-custom\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 
19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-logs\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047361 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/053f5f35-2164-43eb-9223-f36a5de46700-logs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047424 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2l7\" (UniqueName: \"kubernetes.io/projected/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-kube-api-access-7l2l7\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " 
pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.047668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzhj\" (UniqueName: \"kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj\") pod \"barbican-api-5789c5b8cd-gst5f\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.049464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/053f5f35-2164-43eb-9223-f36a5de46700-logs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.049860 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-logs\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.058628 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-combined-ca-bundle\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.108632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data-custom\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " 
pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.118595 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053f5f35-2164-43eb-9223-f36a5de46700-config-data\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.125725 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.127522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-combined-ca-bundle\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.176157 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttxs\" (UniqueName: \"kubernetes.io/projected/053f5f35-2164-43eb-9223-f36a5de46700-kube-api-access-4ttxs\") pod \"barbican-worker-6766d67c9f-bwz6s\" (UID: \"053f5f35-2164-43eb-9223-f36a5de46700\") " pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.176579 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-config-data-custom\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: 
\"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.197048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2l7\" (UniqueName: \"kubernetes.io/projected/2b323aac-f5a2-4adf-8d27-3c1b194b3b3f-kube-api-access-7l2l7\") pod \"barbican-keystone-listener-6bf88c4df4-qpgln\" (UID: \"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f\") " pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.393422 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bf88c4df4-qpgln"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.393468 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6766d67c9f-bwz6s"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.393506 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.395780 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"] Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.395913 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.496795 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.511060 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.525793 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.525906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5fm\" (UniqueName: \"kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.525989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.526073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.526175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle\") pod 
\"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.584973 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6766d67c9f-bwz6s" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.628386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.628573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.628599 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5fm\" (UniqueName: \"kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.628650 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.628700 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.632796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.646862 4792 generic.go:334] "Generic (PLEG): container finished" podID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerID="b7dc050f256ec5d7c79bc1c2879f36b4843100f3a09d927c650dddcfe3d1c4e0" exitCode=0 Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.646966 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.646975 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.648307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" event={"ID":"3442fc07-ecbb-4602-9d28-fcbcff219873","Type":"ContainerDied","Data":"b7dc050f256ec5d7c79bc1c2879f36b4843100f3a09d927c650dddcfe3d1c4e0"} Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.652427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.655884 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cl5fm\" (UniqueName: \"kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.713965 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.716266 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data\") pod \"barbican-api-678bf6fff8-vd4c4\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.914321 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" Mar 19 17:07:28 crc kubenswrapper[4792]: I0319 17:07:28.927903 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.015022 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.028719 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"] Mar 19 17:07:29 crc kubenswrapper[4792]: W0319 17:07:29.044271 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69a561ed_717c_43e0_82b3_42bb63bb68b5.slice/crio-1ab6386cd60a635190fec871aa66b6a9aee0bfa3b8d822d3813634c72d48ca38 WatchSource:0}: Error finding container 1ab6386cd60a635190fec871aa66b6a9aee0bfa3b8d822d3813634c72d48ca38: Status 404 returned error can't find the container with id 1ab6386cd60a635190fec871aa66b6a9aee0bfa3b8d822d3813634c72d48ca38 Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.045031 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68456dfd85-xsh6s"] Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.132157 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"] Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.437030 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.596895 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btql7\" (UniqueName: \"kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.597142 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.597298 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.597399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.598794 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.598965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") " Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.645803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7" (OuterVolumeSpecName: "kube-api-access-btql7") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "kube-api-access-btql7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.709630 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btql7\" (UniqueName: \"kubernetes.io/projected/3442fc07-ecbb-4602-9d28-fcbcff219873-kube-api-access-btql7\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.711341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9" event={"ID":"3442fc07-ecbb-4602-9d28-fcbcff219873","Type":"ContainerDied","Data":"56340c750d1aac1e27ce2635d66a56265776d58fdbd35451d61c3967e338d5ce"} Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.711407 4792 scope.go:117] "RemoveContainer" containerID="b7dc050f256ec5d7c79bc1c2879f36b4843100f3a09d927c650dddcfe3d1c4e0" Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.711583 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-v76p9"
Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.712824 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"]
Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.728373 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"]
Mar 19 17:07:29 crc kubenswrapper[4792]: I0319 17:07:29.734580 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerStarted","Data":"a5571fa9ce7de71a22cf4956e4306e395aeaa2ce63dd7582976d9e219d6771f6"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.029445 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.041155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") pod \"3442fc07-ecbb-4602-9d28-fcbcff219873\" (UID: \"3442fc07-ecbb-4602-9d28-fcbcff219873\") "
Mar 19 17:07:30 crc kubenswrapper[4792]: W0319 17:07:30.042154 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3442fc07-ecbb-4602-9d28-fcbcff219873/volumes/kubernetes.io~configmap/ovsdbserver-sb
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.042170 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.053057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.074553 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.139708 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.144042 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.144070 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.144081 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.144090 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.170471 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config" (OuterVolumeSpecName: "config") pod "3442fc07-ecbb-4602-9d28-fcbcff219873" (UID: "3442fc07-ecbb-4602-9d28-fcbcff219873"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.245792 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3442fc07-ecbb-4602-9d28-fcbcff219873-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281303 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6766d67c9f-bwz6s"]
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281347 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bf88c4df4-qpgln"]
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerStarted","Data":"b18dc4e4b43c6528e38332bf56cd0852de08ab9bbff7070d85d6aca1991cc604"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281383 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"]
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerStarted","Data":"f16e9c510929366875024e6d0538c492fd1f793c410c58461e058867de95a88b"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerStarted","Data":"1ab6386cd60a635190fec871aa66b6a9aee0bfa3b8d822d3813634c72d48ca38"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.281419 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68456dfd85-xsh6s" event={"ID":"a782fd7c-52d9-472c-98f4-a390ca0d94b6","Type":"ContainerStarted","Data":"18b645b533c73a9767fd3b88c182256eaafd8ffbc5090d078c62c1fd04715795"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.323585 4792 scope.go:117] "RemoveContainer" containerID="c458994a2fca97572611bbe753b0a1c2a6afb819bd004be359aa262ae9de8118"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.376751 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"]
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.418264 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-v76p9"]
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.769127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.769255 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.770024 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.798745 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerStarted","Data":"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.798789 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerStarted","Data":"1fb4a38487621056f1fff23c57626657667234767f033d33dd1d900f19c6a46b"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.808604 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" event={"ID":"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f","Type":"ContainerStarted","Data":"dfd7e756f63c56dfa51005e02287741d5863dc53f3dced3b0fcfade044c4128c"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.821735 4792 generic.go:334] "Generic (PLEG): container finished" podID="35fedaab-ff86-4533-933f-76c7143d9614" containerID="037681b5cbe975348b0b08b9ba09a4857810b45b4e703f5610825637c7d58455" exitCode=0
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.821793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" event={"ID":"35fedaab-ff86-4533-933f-76c7143d9614","Type":"ContainerDied","Data":"037681b5cbe975348b0b08b9ba09a4857810b45b4e703f5610825637c7d58455"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.821816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" event={"ID":"35fedaab-ff86-4533-933f-76c7143d9614","Type":"ContainerStarted","Data":"352bbcf81ea503f3086cfea649a4a88844226f4c9784d70710faec7593534242"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.842519 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.842573 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.859034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68456dfd85-xsh6s" event={"ID":"a782fd7c-52d9-472c-98f4-a390ca0d94b6","Type":"ContainerStarted","Data":"9ae47cd3d93637032ba79d5ba758d953f70684891ca759528959da1e2c819f4c"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.859972 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-68456dfd85-xsh6s"
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.917611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6766d67c9f-bwz6s" event={"ID":"053f5f35-2164-43eb-9223-f36a5de46700","Type":"ContainerStarted","Data":"f4fe4d4e0ee08506e2223e9ddb1a4cbb3ac155f9680603c3bfb0d6877a7b4bcf"}
Mar 19 17:07:30 crc kubenswrapper[4792]: I0319 17:07:30.937233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerStarted","Data":"0ddd27c61147711561db2e7a7361fa0e41051f108c0d6789796cf2c2b0030f83"}
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.014969 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-68456dfd85-xsh6s" podStartSLOduration=4.014947509 podStartE2EDuration="4.014947509s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:30.928140066 +0000 UTC m=+1614.074197606" watchObservedRunningTime="2026-03-19 17:07:31.014947509 +0000 UTC m=+1614.161005049"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.466158 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"]
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.502812 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c5884965b-5vqgk"]
Mar 19 17:07:31 crc kubenswrapper[4792]: E0319 17:07:31.503364 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="init"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.503380 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="init"
Mar 19 17:07:31 crc kubenswrapper[4792]: E0319 17:07:31.503390 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="dnsmasq-dns"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.503395 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="dnsmasq-dns"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.503608 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" containerName="dnsmasq-dns"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.504778 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.508185 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.512424 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.542295 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c5884965b-5vqgk"]
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.602740 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xdlr\" (UniqueName: \"kubernetes.io/projected/906667a8-fd5c-499a-9e1c-6fc52661d893-kube-api-access-2xdlr\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.602868 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906667a8-fd5c-499a-9e1c-6fc52661d893-logs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.603125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-public-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.603178 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data-custom\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.603262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-internal-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.603302 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.603427 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-combined-ca-bundle\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-internal-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707366 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-combined-ca-bundle\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xdlr\" (UniqueName: \"kubernetes.io/projected/906667a8-fd5c-499a-9e1c-6fc52661d893-kube-api-access-2xdlr\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906667a8-fd5c-499a-9e1c-6fc52661d893-logs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-public-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.707579 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data-custom\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.708449 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906667a8-fd5c-499a-9e1c-6fc52661d893-logs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.712421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-public-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.712901 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data-custom\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.717142 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-config-data\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.730811 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-combined-ca-bundle\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.737424 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906667a8-fd5c-499a-9e1c-6fc52661d893-internal-tls-certs\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.750586 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xdlr\" (UniqueName: \"kubernetes.io/projected/906667a8-fd5c-499a-9e1c-6fc52661d893-kube-api-access-2xdlr\") pod \"barbican-api-7c5884965b-5vqgk\" (UID: \"906667a8-fd5c-499a-9e1c-6fc52661d893\") " pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.766419 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3442fc07-ecbb-4602-9d28-fcbcff219873" path="/var/lib/kubelet/pods/3442fc07-ecbb-4602-9d28-fcbcff219873/volumes"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.836594 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.953403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerStarted","Data":"11d98627ef0b353d3a6d480e83afaa5c50c99a2db1155bff44699a3e9f3a1a3e"}
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.964545 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerStarted","Data":"00bf7acbdf98d22fa5bdbb646f387a0b0040a58f9197e80e83d17e17987bcb99"}
Mar 19 17:07:31 crc kubenswrapper[4792]: I0319 17:07:31.992357 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7c679c588-pcfbf" podStartSLOduration=5.99234126 podStartE2EDuration="5.99234126s" podCreationTimestamp="2026-03-19 17:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:31.98211804 +0000 UTC m=+1615.128175600" watchObservedRunningTime="2026-03-19 17:07:31.99234126 +0000 UTC m=+1615.138398800"
Mar 19 17:07:32 crc kubenswrapper[4792]: I0319 17:07:32.740953 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c"
Mar 19 17:07:32 crc kubenswrapper[4792]: E0319 17:07:32.741516 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.005472 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerStarted","Data":"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"}
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.005813 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5789c5b8cd-gst5f" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api-log" containerID="cri-o://1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53" gracePeriod=30
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.007321 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5789c5b8cd-gst5f" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api" containerID="cri-o://5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145" gracePeriod=30
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.007430 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5789c5b8cd-gst5f"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.007467 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5789c5b8cd-gst5f"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.023513 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r6f9z" event={"ID":"cdaaa799-71ff-429b-86fe-bbe4e903984f","Type":"ContainerStarted","Data":"cdab2a29e594bc9da55ff2d3e3cbed2d7331b9e2287aff1b5bfd36d28b40af03"}
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.050448 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6ftwc" event={"ID":"ef634102-a683-498b-ad98-61d470f7fefa","Type":"ContainerStarted","Data":"be1d7532550a7578e71a2043fef89ab8d93bef6083d985c1c48af294bd01a3c6"}
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.074465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" event={"ID":"35fedaab-ff86-4533-933f-76c7143d9614","Type":"ContainerStarted","Data":"d1dd9690b40bcc47a0a6dbe35ec19adee2afdcdeaaf7595f56d550aa61e1784a"}
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.074689 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c679c588-pcfbf"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.074770 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.076057 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c679c588-pcfbf"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.088880 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5789c5b8cd-gst5f" podStartSLOduration=6.088862721 podStartE2EDuration="6.088862721s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:33.037417019 +0000 UTC m=+1616.183474569" watchObservedRunningTime="2026-03-19 17:07:33.088862721 +0000 UTC m=+1616.234920261"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.106933 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-r6f9z" podStartSLOduration=5.313359734 podStartE2EDuration="50.106912407s" podCreationTimestamp="2026-03-19 17:06:43 +0000 UTC" firstStartedPulling="2026-03-19 17:06:45.839668147 +0000 UTC m=+1568.985725687" lastFinishedPulling="2026-03-19 17:07:30.63322083 +0000 UTC m=+1613.779278360" observedRunningTime="2026-03-19 17:07:33.059273489 +0000 UTC m=+1616.205331029" watchObservedRunningTime="2026-03-19 17:07:33.106912407 +0000 UTC m=+1616.252969967"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.115536 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6ftwc" podStartSLOduration=6.440085373 podStartE2EDuration="50.115513703s" podCreationTimestamp="2026-03-19 17:06:43 +0000 UTC" firstStartedPulling="2026-03-19 17:06:45.842298439 +0000 UTC m=+1568.988355979" lastFinishedPulling="2026-03-19 17:07:29.517726769 +0000 UTC m=+1612.663784309" observedRunningTime="2026-03-19 17:07:33.087301939 +0000 UTC m=+1616.233359479" watchObservedRunningTime="2026-03-19 17:07:33.115513703 +0000 UTC m=+1616.261571243"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.138802 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" podStartSLOduration=6.138784802 podStartE2EDuration="6.138784802s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:33.124265623 +0000 UTC m=+1616.270323163" watchObservedRunningTime="2026-03-19 17:07:33.138784802 +0000 UTC m=+1616.284842342"
Mar 19 17:07:33 crc kubenswrapper[4792]: I0319 17:07:33.912905 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c5884965b-5vqgk"]
Mar 19 17:07:33 crc kubenswrapper[4792]: W0319 17:07:33.921759 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906667a8_fd5c_499a_9e1c_6fc52661d893.slice/crio-99050fe440fc5afe0b00ca9dc3779df779d4180521a423150f356bed45cb81e5 WatchSource:0}: Error finding container 99050fe440fc5afe0b00ca9dc3779df779d4180521a423150f356bed45cb81e5: Status 404 returned error can't find the container with id 99050fe440fc5afe0b00ca9dc3779df779d4180521a423150f356bed45cb81e5
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.112102 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerStarted","Data":"f752e7a97a0218809a17e3a8e96484ea15642c0b69493fa2ea47696eaab83503"}
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.147388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c5884965b-5vqgk" event={"ID":"906667a8-fd5c-499a-9e1c-6fc52661d893","Type":"ContainerStarted","Data":"99050fe440fc5afe0b00ca9dc3779df779d4180521a423150f356bed45cb81e5"}
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.160065 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerStarted","Data":"d81672c760a433672a36c5a2817383ea58784a589948dbd29f61714de2cabf76"}
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.160961 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-678bf6fff8-vd4c4"
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.161005 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-678bf6fff8-vd4c4"
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.187493 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-678bf6fff8-vd4c4" podStartSLOduration=7.1874787 podStartE2EDuration="7.1874787s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:34.180512149 +0000 UTC m=+1617.326569689" watchObservedRunningTime="2026-03-19 17:07:34.1874787 +0000 UTC m=+1617.333536240"
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.188089 4792 generic.go:334] "Generic (PLEG): container finished" podID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerID="1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53" exitCode=143
Mar 19 17:07:34 crc kubenswrapper[4792]: I0319 17:07:34.188393 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerDied","Data":"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.202400 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" event={"ID":"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f","Type":"ContainerStarted","Data":"a10c8972ff19f8eeb5ba4f16bb0970c30cdc0f8800616f850554d4266b3443c3"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.203055 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" event={"ID":"2b323aac-f5a2-4adf-8d27-3c1b194b3b3f","Type":"ContainerStarted","Data":"b3006246d5c230bcc0de37cf85d2415d49f7cccf065bf58960850dd27e3e1234"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.217234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c5884965b-5vqgk" event={"ID":"906667a8-fd5c-499a-9e1c-6fc52661d893","Type":"ContainerStarted","Data":"33a522928db11764e67a6cf6f11015b953ccee6f9526f804885c2556042f5705"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.217287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c5884965b-5vqgk" event={"ID":"906667a8-fd5c-499a-9e1c-6fc52661d893","Type":"ContainerStarted","Data":"d9d5db643bb4096e21a1731c51339826bb8636c0a018e26396a9b95bd1ffa0ba"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.217338 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.217438 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c5884965b-5vqgk"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.233950 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6766d67c9f-bwz6s" event={"ID":"053f5f35-2164-43eb-9223-f36a5de46700","Type":"ContainerStarted","Data":"181f406aa1e0014f58904f4530b35f37c423edc5d5a40be9ade7730416806b44"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.233990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6766d67c9f-bwz6s" event={"ID":"053f5f35-2164-43eb-9223-f36a5de46700","Type":"ContainerStarted","Data":"f955e4a1942b31a4e4d8fdb348b2d756877dec1e9f9e87865fc8746baf06be0c"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.235401 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6bf88c4df4-qpgln" podStartSLOduration=4.746405929 podStartE2EDuration="8.235383727s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="2026-03-19 17:07:30.149657086 +0000 UTC m=+1613.295714626" lastFinishedPulling="2026-03-19 17:07:33.638634884 +0000 UTC m=+1616.784692424" observedRunningTime="2026-03-19 17:07:35.223965593 +0000 UTC m=+1618.370023163" watchObservedRunningTime="2026-03-19 17:07:35.235383727 +0000 UTC m=+1618.381441267"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.250108 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerStarted","Data":"1b82dfb0727719e573aba2437c2ad3efe19704e253f517464fa3fbc41b5dc558"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.278232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerStarted","Data":"81773eeb12d814f728a8efb12809dfa90689043addaf70e9b66b5aa382750703"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.278277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerStarted","Data":"5a520531165cfc7e3d6f562479e7057e13c86e9880f92f7b206d9bc252156cdb"}
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.318545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c5884965b-5vqgk" podStartSLOduration=4.318520299 podStartE2EDuration="4.318520299s" podCreationTimestamp="2026-03-19 17:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:35.253398601 +0000 UTC m=+1618.399456141" watchObservedRunningTime="2026-03-19 17:07:35.318520299 +0000 UTC m=+1618.464577859"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.372550 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" podStartSLOduration=3.925928776 podStartE2EDuration="8.372532632s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="2026-03-19 17:07:29.18157946 +0000 UTC m=+1612.327637000" lastFinishedPulling="2026-03-19 17:07:33.628183306 +0000 UTC m=+1616.774240856" observedRunningTime="2026-03-19 17:07:35.278239293 +0000 UTC m=+1618.424296833" watchObservedRunningTime="2026-03-19 17:07:35.372532632 +0000 UTC m=+1618.518590172"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.372833 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"]
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.381294 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6766d67c9f-bwz6s" podStartSLOduration=4.615932666 podStartE2EDuration="8.381261901s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="2026-03-19 17:07:29.861990948 +0000 UTC m=+1613.008048488" lastFinishedPulling="2026-03-19 17:07:33.627320183 +0000 UTC m=+1616.773377723" observedRunningTime="2026-03-19 17:07:35.311369032 +0000 UTC m=+1618.457426582" watchObservedRunningTime="2026-03-19 17:07:35.381261901 +0000 UTC m=+1618.527319441"
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.398277 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"]
Mar 19 17:07:35 crc kubenswrapper[4792]: I0319 17:07:35.408715 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" podStartSLOduration=3.93954863 podStartE2EDuration="8.408698475s" podCreationTimestamp="2026-03-19 17:07:27 +0000 UTC" firstStartedPulling="2026-03-19 17:07:29.081650007 +0000 UTC m=+1612.227707547" lastFinishedPulling="2026-03-19 17:07:33.550799862 +0000 UTC m=+1616.696857392" observedRunningTime="2026-03-19 17:07:35.338007173 +0000 UTC m=+1618.484064713" watchObservedRunningTime="2026-03-19 17:07:35.408698475 +0000 UTC m=+1618.554756015"
Mar 19 17:07:36 crc kubenswrapper[4792]: I0319 17:07:36.257607 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c679c588-pcfbf"
Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.259102 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c679c588-pcfbf"
Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.298801 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker-log" containerID="cri-o://f752e7a97a0218809a17e3a8e96484ea15642c0b69493fa2ea47696eaab83503" gracePeriod=30
Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.299297 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd"
podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker" containerID="cri-o://1b82dfb0727719e573aba2437c2ad3efe19704e253f517464fa3fbc41b5dc558" gracePeriod=30 Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.299299 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener-log" containerID="cri-o://5a520531165cfc7e3d6f562479e7057e13c86e9880f92f7b206d9bc252156cdb" gracePeriod=30 Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.299596 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener" containerID="cri-o://81773eeb12d814f728a8efb12809dfa90689043addaf70e9b66b5aa382750703" gracePeriod=30 Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.498373 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75d8cc585d-x4dns"] Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.500376 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.533331 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75d8cc585d-x4dns"] Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.672657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-logs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.672818 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-public-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.672875 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-internal-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.672977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqzj\" (UniqueName: \"kubernetes.io/projected/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-kube-api-access-lqqzj\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.673032 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-scripts\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.673108 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-combined-ca-bundle\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.673152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-config-data\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.781788 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-logs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.781937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-logs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.782023 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-public-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.782074 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-internal-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.782246 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqzj\" (UniqueName: \"kubernetes.io/projected/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-kube-api-access-lqqzj\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.782572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-scripts\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.783063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-combined-ca-bundle\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.783245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-config-data\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.788178 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-scripts\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.788583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-combined-ca-bundle\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.788863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-config-data\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.790682 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-public-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.792621 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-internal-tls-certs\") pod \"placement-75d8cc585d-x4dns\" (UID: 
\"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:37 crc kubenswrapper[4792]: I0319 17:07:37.854285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqzj\" (UniqueName: \"kubernetes.io/projected/8ccda1e8-2f07-47f8-9887-c72ae0b11c89-kube-api-access-lqqzj\") pod \"placement-75d8cc585d-x4dns\" (UID: \"8ccda1e8-2f07-47f8-9887-c72ae0b11c89\") " pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.142000 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.311936 4792 generic.go:334] "Generic (PLEG): container finished" podID="1314b54d-d592-40e5-909c-179fbc624d8e" containerID="f752e7a97a0218809a17e3a8e96484ea15642c0b69493fa2ea47696eaab83503" exitCode=143 Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.312012 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerDied","Data":"f752e7a97a0218809a17e3a8e96484ea15642c0b69493fa2ea47696eaab83503"} Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.314411 4792 generic.go:334] "Generic (PLEG): container finished" podID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerID="81773eeb12d814f728a8efb12809dfa90689043addaf70e9b66b5aa382750703" exitCode=0 Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.314432 4792 generic.go:334] "Generic (PLEG): container finished" podID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerID="5a520531165cfc7e3d6f562479e7057e13c86e9880f92f7b206d9bc252156cdb" exitCode=143 Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.314450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" 
event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerDied","Data":"81773eeb12d814f728a8efb12809dfa90689043addaf70e9b66b5aa382750703"} Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.314469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerDied","Data":"5a520531165cfc7e3d6f562479e7057e13c86e9880f92f7b206d9bc252156cdb"} Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.515865 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.599871 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.600154 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" containerID="cri-o://909a160e85928f77c1041e55b6656ff30300bc14545f7cc3172eb1b63e289a98" gracePeriod=10 Mar 19 17:07:38 crc kubenswrapper[4792]: I0319 17:07:38.644561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:39 crc kubenswrapper[4792]: I0319 17:07:39.332415 4792 generic.go:334] "Generic (PLEG): container finished" podID="a519499d-858b-46d9-81d6-22b3c58eceab" containerID="909a160e85928f77c1041e55b6656ff30300bc14545f7cc3172eb1b63e289a98" exitCode=0 Mar 19 17:07:39 crc kubenswrapper[4792]: I0319 17:07:39.332579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" event={"ID":"a519499d-858b-46d9-81d6-22b3c58eceab","Type":"ContainerDied","Data":"909a160e85928f77c1041e55b6656ff30300bc14545f7cc3172eb1b63e289a98"} Mar 19 17:07:39 crc kubenswrapper[4792]: I0319 17:07:39.336303 4792 
generic.go:334] "Generic (PLEG): container finished" podID="1314b54d-d592-40e5-909c-179fbc624d8e" containerID="1b82dfb0727719e573aba2437c2ad3efe19704e253f517464fa3fbc41b5dc558" exitCode=0 Mar 19 17:07:39 crc kubenswrapper[4792]: I0319 17:07:39.336340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerDied","Data":"1b82dfb0727719e573aba2437c2ad3efe19704e253f517464fa3fbc41b5dc558"} Mar 19 17:07:40 crc kubenswrapper[4792]: I0319 17:07:40.252511 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:40 crc kubenswrapper[4792]: I0319 17:07:40.290258 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:41 crc kubenswrapper[4792]: I0319 17:07:41.203517 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:07:42 crc kubenswrapper[4792]: I0319 17:07:42.524223 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: connect: connection refused" Mar 19 17:07:43 crc kubenswrapper[4792]: I0319 17:07:43.371518 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c5884965b-5vqgk" Mar 19 17:07:43 crc kubenswrapper[4792]: I0319 17:07:43.429245 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c5884965b-5vqgk" Mar 19 17:07:43 crc kubenswrapper[4792]: I0319 17:07:43.495380 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"] Mar 19 17:07:43 crc kubenswrapper[4792]: I0319 17:07:43.495592 4792 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" containerID="cri-o://11d98627ef0b353d3a6d480e83afaa5c50c99a2db1155bff44699a3e9f3a1a3e" gracePeriod=30 Mar 19 17:07:43 crc kubenswrapper[4792]: I0319 17:07:43.496161 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" containerID="cri-o://d81672c760a433672a36c5a2817383ea58784a589948dbd29f61714de2cabf76" gracePeriod=30 Mar 19 17:07:45 crc kubenswrapper[4792]: I0319 17:07:45.741263 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:07:45 crc kubenswrapper[4792]: E0319 17:07:45.742083 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:07:46 crc kubenswrapper[4792]: I0319 17:07:46.430247 4792 generic.go:334] "Generic (PLEG): container finished" podID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerID="11d98627ef0b353d3a6d480e83afaa5c50c99a2db1155bff44699a3e9f3a1a3e" exitCode=143 Mar 19 17:07:46 crc kubenswrapper[4792]: I0319 17:07:46.430312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerDied","Data":"11d98627ef0b353d3a6d480e83afaa5c50c99a2db1155bff44699a3e9f3a1a3e"} Mar 19 17:07:46 crc kubenswrapper[4792]: I0319 17:07:46.935399 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:36398->10.217.0.210:9311: read: connection reset by peer" Mar 19 17:07:46 crc kubenswrapper[4792]: I0319 17:07:46.935415 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:36394->10.217.0.210:9311: read: connection reset by peer" Mar 19 17:07:47 crc kubenswrapper[4792]: I0319 17:07:47.441630 4792 generic.go:334] "Generic (PLEG): container finished" podID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerID="d81672c760a433672a36c5a2817383ea58784a589948dbd29f61714de2cabf76" exitCode=0 Mar 19 17:07:47 crc kubenswrapper[4792]: I0319 17:07:47.441709 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerDied","Data":"d81672c760a433672a36c5a2817383ea58784a589948dbd29f61714de2cabf76"} Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.355390 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453215 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkccc\" (UniqueName: \"kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453404 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453439 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453520 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.453608 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb\") pod \"a519499d-858b-46d9-81d6-22b3c58eceab\" (UID: \"a519499d-858b-46d9-81d6-22b3c58eceab\") " Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.501473 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc" (OuterVolumeSpecName: "kube-api-access-tkccc") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "kube-api-access-tkccc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.518567 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" event={"ID":"a519499d-858b-46d9-81d6-22b3c58eceab","Type":"ContainerDied","Data":"18a4ae80c54b60a5e4df91ad9048a1ef4644b55a157b0924f4f824ce14ae2472"} Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.518831 4792 scope.go:117] "RemoveContainer" containerID="909a160e85928f77c1041e55b6656ff30300bc14545f7cc3172eb1b63e289a98" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.519059 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.554795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.558163 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkccc\" (UniqueName: \"kubernetes.io/projected/a519499d-858b-46d9-81d6-22b3c58eceab-kube-api-access-tkccc\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.558831 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.571423 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config" (OuterVolumeSpecName: "config") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.573178 4792 scope.go:117] "RemoveContainer" containerID="fb820acd1f991bc2cf37cf27892e4772a7a78f29e750504399be7a685b8fb10e" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.580895 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: E0319 17:07:48.615029 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 19 17:07:48 crc kubenswrapper[4792]: E0319 17:07:48.615555 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt2ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 19 17:07:48 crc kubenswrapper[4792]: E0319 17:07:48.617004 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.617817 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.628085 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a519499d-858b-46d9-81d6-22b3c58eceab" (UID: "a519499d-858b-46d9-81d6-22b3c58eceab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.663648 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.663678 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.663687 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.663696 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a519499d-858b-46d9-81d6-22b3c58eceab-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.885964 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.901667 4792 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-db8kg"] Mar 19 17:07:48 crc kubenswrapper[4792]: I0319 17:07:48.940907 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75d8cc585d-x4dns"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.059702 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.063037 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.087907 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.184205 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.184672 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle\") pod \"03771a31-1a24-4d59-a92a-31f89f9bc89d\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.184709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.184854 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk9s2\" (UniqueName: \"kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185038 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185107 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185178 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom\") pod \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185207 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5gds\" (UniqueName: \"kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds\") pod \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185237 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom\") pod 
\"03771a31-1a24-4d59-a92a-31f89f9bc89d\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185258 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data\") pod \"03771a31-1a24-4d59-a92a-31f89f9bc89d\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185328 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs\") pod \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data\") pod \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185501 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl5fm\" (UniqueName: \"kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm\") pod \"03771a31-1a24-4d59-a92a-31f89f9bc89d\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185536 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle\") pod \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\" (UID: \"f8e3b8f7-f805-459b-8669-c2b0d77b2cea\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.185566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs\") pod \"03771a31-1a24-4d59-a92a-31f89f9bc89d\" (UID: \"03771a31-1a24-4d59-a92a-31f89f9bc89d\") " Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.187147 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs" (OuterVolumeSpecName: "logs") pod "03771a31-1a24-4d59-a92a-31f89f9bc89d" (UID: "03771a31-1a24-4d59-a92a-31f89f9bc89d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.189738 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs" (OuterVolumeSpecName: "logs") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.189911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs" (OuterVolumeSpecName: "logs") pod "f8e3b8f7-f805-459b-8669-c2b0d77b2cea" (UID: "f8e3b8f7-f805-459b-8669-c2b0d77b2cea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.191384 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.196684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "03771a31-1a24-4d59-a92a-31f89f9bc89d" (UID: "03771a31-1a24-4d59-a92a-31f89f9bc89d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.200444 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8e3b8f7-f805-459b-8669-c2b0d77b2cea" (UID: "f8e3b8f7-f805-459b-8669-c2b0d77b2cea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.200513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2" (OuterVolumeSpecName: "kube-api-access-gk9s2") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "kube-api-access-gk9s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.205191 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds" (OuterVolumeSpecName: "kube-api-access-c5gds") pod "f8e3b8f7-f805-459b-8669-c2b0d77b2cea" (UID: "f8e3b8f7-f805-459b-8669-c2b0d77b2cea"). InnerVolumeSpecName "kube-api-access-c5gds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.217670 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm" (OuterVolumeSpecName: "kube-api-access-cl5fm") pod "03771a31-1a24-4d59-a92a-31f89f9bc89d" (UID: "03771a31-1a24-4d59-a92a-31f89f9bc89d"). InnerVolumeSpecName "kube-api-access-cl5fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.251571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8e3b8f7-f805-459b-8669-c2b0d77b2cea" (UID: "f8e3b8f7-f805-459b-8669-c2b0d77b2cea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.278155 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data" (OuterVolumeSpecName: "config-data") pod "03771a31-1a24-4d59-a92a-31f89f9bc89d" (UID: "03771a31-1a24-4d59-a92a-31f89f9bc89d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.281686 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03771a31-1a24-4d59-a92a-31f89f9bc89d" (UID: "03771a31-1a24-4d59-a92a-31f89f9bc89d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.290021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data" (OuterVolumeSpecName: "config-data") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.290075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.290911 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: W0319 17:07:49.291010 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1314b54d-d592-40e5-909c-179fbc624d8e/volumes/kubernetes.io~secret/config-data Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.291021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data" (OuterVolumeSpecName: "config-data") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.291223 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") pod \"1314b54d-d592-40e5-909c-179fbc624d8e\" (UID: \"1314b54d-d592-40e5-909c-179fbc624d8e\") " Mar 19 17:07:49 crc kubenswrapper[4792]: W0319 17:07:49.291333 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1314b54d-d592-40e5-909c-179fbc624d8e/volumes/kubernetes.io~secret/combined-ca-bundle Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.291347 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1314b54d-d592-40e5-909c-179fbc624d8e" (UID: "1314b54d-d592-40e5-909c-179fbc624d8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.291986 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292009 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1314b54d-d592-40e5-909c-179fbc624d8e-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292018 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292057 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5gds\" (UniqueName: \"kubernetes.io/projected/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-kube-api-access-c5gds\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292067 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292076 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292084 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292094 4792 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-cl5fm\" (UniqueName: \"kubernetes.io/projected/03771a31-1a24-4d59-a92a-31f89f9bc89d-kube-api-access-cl5fm\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292102 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292131 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03771a31-1a24-4d59-a92a-31f89f9bc89d-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292140 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292149 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03771a31-1a24-4d59-a92a-31f89f9bc89d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292156 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1314b54d-d592-40e5-909c-179fbc624d8e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.292179 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk9s2\" (UniqueName: \"kubernetes.io/projected/1314b54d-d592-40e5-909c-179fbc624d8e-kube-api-access-gk9s2\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.318758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data" (OuterVolumeSpecName: "config-data") pod "f8e3b8f7-f805-459b-8669-c2b0d77b2cea" (UID: "f8e3b8f7-f805-459b-8669-c2b0d77b2cea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.394085 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e3b8f7-f805-459b-8669-c2b0d77b2cea-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.531491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-678bf6fff8-vd4c4" event={"ID":"03771a31-1a24-4d59-a92a-31f89f9bc89d","Type":"ContainerDied","Data":"0ddd27c61147711561db2e7a7361fa0e41051f108c0d6789796cf2c2b0030f83"} Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.531515 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-678bf6fff8-vd4c4" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.531922 4792 scope.go:117] "RemoveContainer" containerID="d81672c760a433672a36c5a2817383ea58784a589948dbd29f61714de2cabf76" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.533445 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.533984 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c8bf6b7df-4wwcd" event={"ID":"1314b54d-d592-40e5-909c-179fbc624d8e","Type":"ContainerDied","Data":"a5571fa9ce7de71a22cf4956e4306e395aeaa2ce63dd7582976d9e219d6771f6"} Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.536345 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.536339 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f947cc86b-s9rw6" event={"ID":"f8e3b8f7-f805-459b-8669-c2b0d77b2cea","Type":"ContainerDied","Data":"b18dc4e4b43c6528e38332bf56cd0852de08ab9bbff7070d85d6aca1991cc604"} Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.540582 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75d8cc585d-x4dns" event={"ID":"8ccda1e8-2f07-47f8-9887-c72ae0b11c89","Type":"ContainerStarted","Data":"cf3acfcef8c2ac29d9f923c289d0e4310cd348566b73461b67cc151cf8b6f8c7"} Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.540692 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="ceilometer-notification-agent" containerID="cri-o://3c523ba07fe1239f7ecc3166c35834f363351179822199a3716cb6e18c0d4bd9" gracePeriod=30 Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.540833 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="sg-core" containerID="cri-o://8836aac468233d2dab1a73339d5a3de6126fc6cbe62b9330d484401e592d78f3" gracePeriod=30 Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.566950 4792 scope.go:117] "RemoveContainer" containerID="11d98627ef0b353d3a6d480e83afaa5c50c99a2db1155bff44699a3e9f3a1a3e" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.599322 4792 scope.go:117] "RemoveContainer" containerID="1b82dfb0727719e573aba2437c2ad3efe19704e253f517464fa3fbc41b5dc558" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.603365 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"] Mar 19 17:07:49 crc 
kubenswrapper[4792]: I0319 17:07:49.620071 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6f947cc86b-s9rw6"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.625873 4792 scope.go:117] "RemoveContainer" containerID="f752e7a97a0218809a17e3a8e96484ea15642c0b69493fa2ea47696eaab83503" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.630461 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.642916 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5c8bf6b7df-4wwcd"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.652949 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.653342 4792 scope.go:117] "RemoveContainer" containerID="81773eeb12d814f728a8efb12809dfa90689043addaf70e9b66b5aa382750703" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.664770 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-678bf6fff8-vd4c4"] Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.722369 4792 scope.go:117] "RemoveContainer" containerID="5a520531165cfc7e3d6f562479e7057e13c86e9880f92f7b206d9bc252156cdb" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.753641 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" path="/var/lib/kubelet/pods/03771a31-1a24-4d59-a92a-31f89f9bc89d/volumes" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.754334 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" path="/var/lib/kubelet/pods/1314b54d-d592-40e5-909c-179fbc624d8e/volumes" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.754971 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a519499d-858b-46d9-81d6-22b3c58eceab" path="/var/lib/kubelet/pods/a519499d-858b-46d9-81d6-22b3c58eceab/volumes" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.756080 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" path="/var/lib/kubelet/pods/f8e3b8f7-f805-459b-8669-c2b0d77b2cea/volumes" Mar 19 17:07:49 crc kubenswrapper[4792]: I0319 17:07:49.868554 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.116884 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.117172 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65c9569ddf-24zz6" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-api" containerID="cri-o://e186b30476926b21afbd7781efe2b3fce255f4665739e11011daa22ce0f54925" gracePeriod=30 Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.117551 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65c9569ddf-24zz6" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" containerID="cri-o://531d789062b377ce6f3ad30c627bcb3aee6ab58f003ac30243da3993acc8b218" gracePeriod=30 Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.160643 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-678f6bc965-29ckw"] Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.162269 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.162398 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener" Mar 
19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.162488 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.162559 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker-log" Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.162628 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163025 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker" Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.163105 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163186 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.163277 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163354 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.163431 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="init" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163502 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="init" Mar 19 17:07:50 crc 
kubenswrapper[4792]: E0319 17:07:50.163579 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163703 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener-log" Mar 19 17:07:50 crc kubenswrapper[4792]: E0319 17:07:50.163813 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.163908 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.164353 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.164469 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1314b54d-d592-40e5-909c-179fbc624d8e" containerName="barbican-worker-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.164554 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.164640 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e3b8f7-f805-459b-8669-c2b0d77b2cea" containerName="barbican-keystone-listener" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.164730 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.165063 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.165183 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.166896 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.200062 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678f6bc965-29ckw"] Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321450 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-combined-ca-bundle\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321553 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-public-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321632 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-internal-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kmdp4\" (UniqueName: \"kubernetes.io/projected/b44482c7-fbab-40ba-b3ea-44a568d31edd-kube-api-access-kmdp4\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321793 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-httpd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.321888 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-ovndb-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.423972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424069 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-httpd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424109 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-ovndb-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-combined-ca-bundle\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424232 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-public-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-internal-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.424431 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdp4\" (UniqueName: 
\"kubernetes.io/projected/b44482c7-fbab-40ba-b3ea-44a568d31edd-kube-api-access-kmdp4\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.428474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-public-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.428589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-httpd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.428636 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-ovndb-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.428980 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-combined-ca-bundle\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.429663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-config\") pod \"neutron-678f6bc965-29ckw\" (UID: 
\"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.432623 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44482c7-fbab-40ba-b3ea-44a568d31edd-internal-tls-certs\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.448017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdp4\" (UniqueName: \"kubernetes.io/projected/b44482c7-fbab-40ba-b3ea-44a568d31edd-kube-api-access-kmdp4\") pod \"neutron-678f6bc965-29ckw\" (UID: \"b44482c7-fbab-40ba-b3ea-44a568d31edd\") " pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.498789 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.558121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75d8cc585d-x4dns" event={"ID":"8ccda1e8-2f07-47f8-9887-c72ae0b11c89","Type":"ContainerStarted","Data":"3f577911eda932d120419393c0b609a9e9ce6b10ba6a3783ced6c59aeb6401fd"} Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.558176 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75d8cc585d-x4dns" event={"ID":"8ccda1e8-2f07-47f8-9887-c72ae0b11c89","Type":"ContainerStarted","Data":"f2ec8f6402131d2cb7c9c5f13c17d12dd8c83912ab7520f80ea78a3108466563"} Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.558252 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.558295 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.562855 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerID="8836aac468233d2dab1a73339d5a3de6126fc6cbe62b9330d484401e592d78f3" exitCode=2 Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.562882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerDied","Data":"8836aac468233d2dab1a73339d5a3de6126fc6cbe62b9330d484401e592d78f3"} Mar 19 17:07:50 crc kubenswrapper[4792]: I0319 17:07:50.601995 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75d8cc585d-x4dns" podStartSLOduration=13.601966013 podStartE2EDuration="13.601966013s" podCreationTimestamp="2026-03-19 17:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:50.580479684 +0000 UTC m=+1633.726537244" watchObservedRunningTime="2026-03-19 17:07:50.601966013 +0000 UTC m=+1633.748023553" Mar 19 17:07:51 crc kubenswrapper[4792]: I0319 17:07:51.152564 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678f6bc965-29ckw"] Mar 19 17:07:51 crc kubenswrapper[4792]: W0319 17:07:51.154715 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb44482c7_fbab_40ba_b3ea_44a568d31edd.slice/crio-c266a41900e508f2647c6d30f890220f90921ce88868be4ccc638b6db6e518be WatchSource:0}: Error finding container c266a41900e508f2647c6d30f890220f90921ce88868be4ccc638b6db6e518be: Status 404 returned error can't find the container with id c266a41900e508f2647c6d30f890220f90921ce88868be4ccc638b6db6e518be Mar 19 17:07:51 crc kubenswrapper[4792]: I0319 17:07:51.593339 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-678f6bc965-29ckw" event={"ID":"b44482c7-fbab-40ba-b3ea-44a568d31edd","Type":"ContainerStarted","Data":"933ae401ab76e4c126ca202691300b571b91013efb5e8cdfb279b31000e6090d"} Mar 19 17:07:51 crc kubenswrapper[4792]: I0319 17:07:51.593808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678f6bc965-29ckw" event={"ID":"b44482c7-fbab-40ba-b3ea-44a568d31edd","Type":"ContainerStarted","Data":"c266a41900e508f2647c6d30f890220f90921ce88868be4ccc638b6db6e518be"} Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.524526 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-db8kg" podUID="a519499d-858b-46d9-81d6-22b3c58eceab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.606118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678f6bc965-29ckw" event={"ID":"b44482c7-fbab-40ba-b3ea-44a568d31edd","Type":"ContainerStarted","Data":"13cab47a474c7ba25bbe1221d37ef21640f8386845b99a6daeeb63fa3b5bc51c"} Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.607087 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.609960 4792 generic.go:334] "Generic (PLEG): container finished" podID="cdaaa799-71ff-429b-86fe-bbe4e903984f" containerID="cdab2a29e594bc9da55ff2d3e3cbed2d7331b9e2287aff1b5bfd36d28b40af03" exitCode=0 Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.610020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r6f9z" event={"ID":"cdaaa799-71ff-429b-86fe-bbe4e903984f","Type":"ContainerDied","Data":"cdab2a29e594bc9da55ff2d3e3cbed2d7331b9e2287aff1b5bfd36d28b40af03"} Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.616671 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerID="3c523ba07fe1239f7ecc3166c35834f363351179822199a3716cb6e18c0d4bd9" exitCode=0 Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.616717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerDied","Data":"3c523ba07fe1239f7ecc3166c35834f363351179822199a3716cb6e18c0d4bd9"} Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.639279 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-678f6bc965-29ckw" podStartSLOduration=2.63924496 podStartE2EDuration="2.63924496s" podCreationTimestamp="2026-03-19 17:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:52.628814624 +0000 UTC m=+1635.774872164" watchObservedRunningTime="2026-03-19 17:07:52.63924496 +0000 UTC m=+1635.785302490" Mar 19 17:07:52 crc kubenswrapper[4792]: I0319 17:07:52.982775 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.089924 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090114 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090206 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090292 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2ff\" (UniqueName: 
\"kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090355 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data\") pod \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\" (UID: \"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373\") " Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090823 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.090936 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.091152 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.099175 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts" (OuterVolumeSpecName: "scripts") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.099370 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff" (OuterVolumeSpecName: "kube-api-access-zt2ff") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "kube-api-access-zt2ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.132297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.132757 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data" (OuterVolumeSpecName: "config-data") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.135266 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" (UID: "3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192915 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192952 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192964 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192973 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2ff\" (UniqueName: \"kubernetes.io/projected/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-kube-api-access-zt2ff\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192982 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.192993 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.628252 4792 generic.go:334] "Generic (PLEG): container finished" podID="ef634102-a683-498b-ad98-61d470f7fefa" containerID="be1d7532550a7578e71a2043fef89ab8d93bef6083d985c1c48af294bd01a3c6" exitCode=0 Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.628359 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-db-sync-6ftwc" event={"ID":"ef634102-a683-498b-ad98-61d470f7fefa","Type":"ContainerDied","Data":"be1d7532550a7578e71a2043fef89ab8d93bef6083d985c1c48af294bd01a3c6"} Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.631962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373","Type":"ContainerDied","Data":"9270f2a92bd4c1c18b99092f6c623f1580eff48588bbba81163399c1a2882500"} Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.632020 4792 scope.go:117] "RemoveContainer" containerID="8836aac468233d2dab1a73339d5a3de6126fc6cbe62b9330d484401e592d78f3" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.632022 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.718210 4792 scope.go:117] "RemoveContainer" containerID="3c523ba07fe1239f7ecc3166c35834f363351179822199a3716cb6e18c0d4bd9" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.720524 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.769544 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.782915 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:07:53 crc kubenswrapper[4792]: E0319 17:07:53.783420 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="ceilometer-notification-agent" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.783435 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="ceilometer-notification-agent" Mar 19 17:07:53 crc kubenswrapper[4792]: E0319 17:07:53.783450 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="sg-core" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.783455 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="sg-core" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.783664 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="sg-core" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.783682 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" containerName="ceilometer-notification-agent" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.785948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.795608 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.811234 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.821014 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914651 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914735 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914883 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk6jj\" (UniqueName: \"kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914909 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.914925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " 
pod="openstack/ceilometer-0" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.931226 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": dial tcp 10.217.0.210:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 19 17:07:53 crc kubenswrapper[4792]: I0319 17:07:53.938911 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-678bf6fff8-vd4c4" podUID="03771a31-1a24-4d59-a92a-31f89f9bc89d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.016569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.016639 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.017662 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.017800 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.017823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk6jj\" (UniqueName: \"kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.017867 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.017884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.019405 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.020308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 
17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.025436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.029624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.034600 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.037976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.049220 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk6jj\" (UniqueName: \"kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj\") pod \"ceilometer-0\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.127130 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.317178 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r6f9z" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.424709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gsh\" (UniqueName: \"kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh\") pod \"cdaaa799-71ff-429b-86fe-bbe4e903984f\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.424981 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data\") pod \"cdaaa799-71ff-429b-86fe-bbe4e903984f\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.425029 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle\") pod \"cdaaa799-71ff-429b-86fe-bbe4e903984f\" (UID: \"cdaaa799-71ff-429b-86fe-bbe4e903984f\") " Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.442438 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh" (OuterVolumeSpecName: "kube-api-access-q9gsh") pod "cdaaa799-71ff-429b-86fe-bbe4e903984f" (UID: "cdaaa799-71ff-429b-86fe-bbe4e903984f"). InnerVolumeSpecName "kube-api-access-q9gsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.459879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdaaa799-71ff-429b-86fe-bbe4e903984f" (UID: "cdaaa799-71ff-429b-86fe-bbe4e903984f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.522439 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data" (OuterVolumeSpecName: "config-data") pod "cdaaa799-71ff-429b-86fe-bbe4e903984f" (UID: "cdaaa799-71ff-429b-86fe-bbe4e903984f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.527459 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.527518 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdaaa799-71ff-429b-86fe-bbe4e903984f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.527537 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gsh\" (UniqueName: \"kubernetes.io/projected/cdaaa799-71ff-429b-86fe-bbe4e903984f-kube-api-access-q9gsh\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:54 crc kubenswrapper[4792]: W0319 17:07:54.633779 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccfb9f12_84fa_412b_900d_d254cf4303dc.slice/crio-7e7d582451f80630905ee4eb7b78481cfd5c70301b9ea94fb997c9e4789d55b1 WatchSource:0}: Error finding container 7e7d582451f80630905ee4eb7b78481cfd5c70301b9ea94fb997c9e4789d55b1: Status 404 returned error can't find the container with id 7e7d582451f80630905ee4eb7b78481cfd5c70301b9ea94fb997c9e4789d55b1 Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.648437 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r6f9z" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.648439 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r6f9z" event={"ID":"cdaaa799-71ff-429b-86fe-bbe4e903984f","Type":"ContainerDied","Data":"2762ee6b355c644a9ed5e6ada87b53c714e25a53fb2e3576f7cbe06f8c62df76"} Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.649961 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2762ee6b355c644a9ed5e6ada87b53c714e25a53fb2e3576f7cbe06f8c62df76" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.650394 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.718689 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-65c9569ddf-24zz6" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 19 17:07:54 crc kubenswrapper[4792]: I0319 17:07:54.991647 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041525 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041609 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041647 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041698 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjwlt\" (UniqueName: \"kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.041751 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data\") pod \"ef634102-a683-498b-ad98-61d470f7fefa\" (UID: \"ef634102-a683-498b-ad98-61d470f7fefa\") " Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.042887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.049581 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.050270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts" (OuterVolumeSpecName: "scripts") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.050989 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt" (OuterVolumeSpecName: "kube-api-access-zjwlt") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "kube-api-access-zjwlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.084254 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.120970 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data" (OuterVolumeSpecName: "config-data") pod "ef634102-a683-498b-ad98-61d470f7fefa" (UID: "ef634102-a683-498b-ad98-61d470f7fefa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145629 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145665 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef634102-a683-498b-ad98-61d470f7fefa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145675 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145685 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" 
Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145695 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjwlt\" (UniqueName: \"kubernetes.io/projected/ef634102-a683-498b-ad98-61d470f7fefa-kube-api-access-zjwlt\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.145706 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef634102-a683-498b-ad98-61d470f7fefa-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.535012 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-65c9569ddf-24zz6" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": read tcp 10.217.0.2:34432->10.217.0.201:9696: read: connection reset by peer" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.659642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerStarted","Data":"bdbf99c4bb4f94f84d18f638be74736db4b076104dab51e068f7301fa70656aa"} Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.659917 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerStarted","Data":"7e7d582451f80630905ee4eb7b78481cfd5c70301b9ea94fb997c9e4789d55b1"} Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.662783 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6ftwc" event={"ID":"ef634102-a683-498b-ad98-61d470f7fefa","Type":"ContainerDied","Data":"15fa15635294600f48f347bbe7c8897e1bcdf7653fbfc9be1758cf9f2500fe16"} Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.662829 4792 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="15fa15635294600f48f347bbe7c8897e1bcdf7653fbfc9be1758cf9f2500fe16" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.662907 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6ftwc" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.667405 4792 generic.go:334] "Generic (PLEG): container finished" podID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerID="531d789062b377ce6f3ad30c627bcb3aee6ab58f003ac30243da3993acc8b218" exitCode=0 Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.667432 4792 generic.go:334] "Generic (PLEG): container finished" podID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerID="e186b30476926b21afbd7781efe2b3fce255f4665739e11011daa22ce0f54925" exitCode=0 Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.667450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerDied","Data":"531d789062b377ce6f3ad30c627bcb3aee6ab58f003ac30243da3993acc8b218"} Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.667473 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerDied","Data":"e186b30476926b21afbd7781efe2b3fce255f4665739e11011daa22ce0f54925"} Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.760520 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373" path="/var/lib/kubelet/pods/3c380bc3-72a1-4c70-b3b0-6f3ee2ecc373/volumes" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.993237 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:07:55 crc kubenswrapper[4792]: E0319 17:07:55.994249 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef634102-a683-498b-ad98-61d470f7fefa" containerName="cinder-db-sync" Mar 
19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.994261 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef634102-a683-498b-ad98-61d470f7fefa" containerName="cinder-db-sync" Mar 19 17:07:55 crc kubenswrapper[4792]: E0319 17:07:55.994281 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" containerName="heat-db-sync" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.994289 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" containerName="heat-db-sync" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.994664 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" containerName="heat-db-sync" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.994684 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef634102-a683-498b-ad98-61d470f7fefa" containerName="cinder-db-sync" Mar 19 17:07:55 crc kubenswrapper[4792]: I0319 17:07:55.996402 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.000181 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-g5zgx" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.000356 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.000774 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.002657 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.029394 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.110395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.110526 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.110568 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.110615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.110781 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.111135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nps8\" (UniqueName: \"kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.134943 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.137865 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.201594 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227000 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nps8\" (UniqueName: \"kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227451 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227558 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227626 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227642 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdphd\" (UniqueName: \"kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227680 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227699 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.227800 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.230132 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.236949 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.238762 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.239498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.239742 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.242034 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.246439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.248823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " 
pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.250785 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.254031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nps8\" (UniqueName: \"kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8\") pod \"cinder-scheduler-0\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339181 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339249 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339449 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdphd\" (UniqueName: \"kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.339539 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.340663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.340668 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.341260 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.341307 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.341783 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.360320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdphd\" (UniqueName: \"kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd\") pod \"dnsmasq-dns-6bb4fc677f-vd5bm\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.366942 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.368760 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.442920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443427 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv6cm\" (UniqueName: \"kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: 
\"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443534 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443559 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs\") pod \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\" (UID: \"ab62ad1f-f033-470f-ba9b-e75ace44e30e\") " Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.443938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444622 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444690 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.444882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.448140 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.448976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.451162 4792 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm" (OuterVolumeSpecName: "kube-api-access-gv6cm") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "kube-api-access-gv6cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.454717 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.466643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.467396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.467571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.473686 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config" (OuterVolumeSpecName: "httpd-config") 
pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.474428 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8\") pod \"cinder-api-0\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.483208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.546542 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.546809 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv6cm\" (UniqueName: \"kubernetes.io/projected/ab62ad1f-f033-470f-ba9b-e75ace44e30e-kube-api-access-gv6cm\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.601947 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.638278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.649856 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.649889 4792 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.659658 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.663764 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.670329 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config" (OuterVolumeSpecName: "config") pod "ab62ad1f-f033-470f-ba9b-e75ace44e30e" (UID: "ab62ad1f-f033-470f-ba9b-e75ace44e30e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.693803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerStarted","Data":"3362c0a60992eb4278ec30c79cadcf17969283e0a2509c8936579a727b041d25"} Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.695041 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.710115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65c9569ddf-24zz6" event={"ID":"ab62ad1f-f033-470f-ba9b-e75ace44e30e","Type":"ContainerDied","Data":"e679f05810fb1fcadfd37900793b1d4355ec0e9c0fc088f12a2c45110463f2bd"} Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.710198 4792 scope.go:117] "RemoveContainer" containerID="531d789062b377ce6f3ad30c627bcb3aee6ab58f003ac30243da3993acc8b218" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.710438 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65c9569ddf-24zz6" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.740987 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:07:56 crc kubenswrapper[4792]: E0319 17:07:56.747482 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.752275 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.752318 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.752328 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab62ad1f-f033-470f-ba9b-e75ace44e30e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.772990 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.789481 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-65c9569ddf-24zz6"] Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.820948 4792 scope.go:117] "RemoveContainer" 
containerID="e186b30476926b21afbd7781efe2b3fce255f4665739e11011daa22ce0f54925" Mar 19 17:07:56 crc kubenswrapper[4792]: I0319 17:07:56.939144 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.131067 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:07:57 crc kubenswrapper[4792]: W0319 17:07:57.141122 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0be369_d704_43ad_851a_c7e24798a150.slice/crio-a84fef8f841754ba6cf3967558098f10d4c4c27b1f3247530cb83d0b2ac4b914 WatchSource:0}: Error finding container a84fef8f841754ba6cf3967558098f10d4c4c27b1f3247530cb83d0b2ac4b914: Status 404 returned error can't find the container with id a84fef8f841754ba6cf3967558098f10d4c4c27b1f3247530cb83d0b2ac4b914 Mar 19 17:07:57 crc kubenswrapper[4792]: E0319 17:07:57.255894 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" Mar 19 17:07:57 crc kubenswrapper[4792]: E0319 17:07:57.256072 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-scheduler,Image:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h7ch58h548h5fch666hcch59h9h655h5cch545h5f5h596h5cbh57h94h559h5b6hd4hfbh5bch675h664hc4h58dhf7h7h65h595h59chc8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cinder-scheduler-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nps8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*42407,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-scheduler-0_openstack(dd0be369-d704-43ad-851a-c7e24798a150): ErrImagePull: initializing source docker://quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" logger="UnhandledError" Mar 19 17:07:57 crc kubenswrapper[4792]: E0319 17:07:57.262529 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-scheduler\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\", failed to \"StartContainer\" for \"probe\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\"]" pod="openstack/cinder-scheduler-0" 
podUID="dd0be369-d704-43ad-851a-c7e24798a150" Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.405171 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:07:57 crc kubenswrapper[4792]: E0319 17:07:57.607109 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac9ea57_0d86_4c21_9c31_ee9487da0942.slice/crio-conmon-b9dcfb4c7a807e84d74e35aa0197aa75de1ab3e5548b49187da33dca2160c272.scope\": RecentStats: unable to find data in memory cache]" Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.787269 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" path="/var/lib/kubelet/pods/ab62ad1f-f033-470f-ba9b-e75ace44e30e/volumes" Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.788190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerStarted","Data":"32b40ff99bd7a1480b078105d8eb835b39aaed9b71326dda966ec4499aa3e2ca"} Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.788229 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerStarted","Data":"c490ddb6c0ccf74ec238d5b6ae4992480e98affcca471125804fda43b14ded10"} Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.806178 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerID="b9dcfb4c7a807e84d74e35aa0197aa75de1ab3e5548b49187da33dca2160c272" exitCode=0 Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.807018 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" 
event={"ID":"6ac9ea57-0d86-4c21-9c31-ee9487da0942","Type":"ContainerDied","Data":"b9dcfb4c7a807e84d74e35aa0197aa75de1ab3e5548b49187da33dca2160c272"} Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.807091 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" event={"ID":"6ac9ea57-0d86-4c21-9c31-ee9487da0942","Type":"ContainerStarted","Data":"1e2d82ba96fa33ad81ece60bb2976ff5da025c7b0b6b3152b9b0d851bb230bf6"} Mar 19 17:07:57 crc kubenswrapper[4792]: I0319 17:07:57.816638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerStarted","Data":"a84fef8f841754ba6cf3967558098f10d4c4c27b1f3247530cb83d0b2ac4b914"} Mar 19 17:07:57 crc kubenswrapper[4792]: E0319 17:07:57.826882 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\", failed to \"StartContainer\" for \"probe\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\"]" pod="openstack/cinder-scheduler-0" podUID="dd0be369-d704-43ad-851a-c7e24798a150" Mar 19 17:07:58 crc kubenswrapper[4792]: I0319 17:07:58.696723 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:07:58 crc kubenswrapper[4792]: I0319 17:07:58.830982 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerStarted","Data":"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b"} Mar 19 17:07:58 crc kubenswrapper[4792]: I0319 17:07:58.833454 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" 
event={"ID":"6ac9ea57-0d86-4c21-9c31-ee9487da0942","Type":"ContainerStarted","Data":"0dd6ea90c2d8cf814a163b0dfc0c67e0114e15ee2b1e225aa0f1b156e57993a4"} Mar 19 17:07:58 crc kubenswrapper[4792]: E0319 17:07:58.840072 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\", failed to \"StartContainer\" for \"probe\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\"]" pod="openstack/cinder-scheduler-0" podUID="dd0be369-d704-43ad-851a-c7e24798a150" Mar 19 17:07:58 crc kubenswrapper[4792]: I0319 17:07:58.891109 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" podStartSLOduration=2.891090283 podStartE2EDuration="2.891090283s" podCreationTimestamp="2026-03-19 17:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:58.876293737 +0000 UTC m=+1642.022351267" watchObservedRunningTime="2026-03-19 17:07:58.891090283 +0000 UTC m=+1642.037147823" Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.844956 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerStarted","Data":"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e"} Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.845499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.845078 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api" 
containerID="cri-o://b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" gracePeriod=30 Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.845054 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api-log" containerID="cri-o://6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" gracePeriod=30 Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.847822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerStarted","Data":"6722e62c7746ba2911bc2c90f0a321f49a4ec2e65b514f6314334aea642eaee0"} Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.848802 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.881522 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.881505062 podStartE2EDuration="3.881505062s" podCreationTimestamp="2026-03-19 17:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:07:59.865920644 +0000 UTC m=+1643.011978214" watchObservedRunningTime="2026-03-19 17:07:59.881505062 +0000 UTC m=+1643.027562602" Mar 19 17:07:59 crc kubenswrapper[4792]: I0319 17:07:59.899321 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.245401293 podStartE2EDuration="6.89929944s" podCreationTimestamp="2026-03-19 17:07:53 +0000 UTC" firstStartedPulling="2026-03-19 17:07:54.638629066 +0000 UTC m=+1637.784686606" lastFinishedPulling="2026-03-19 17:07:59.292527213 +0000 UTC m=+1642.438584753" observedRunningTime="2026-03-19 17:07:59.888769231 +0000 UTC 
m=+1643.034826771" watchObservedRunningTime="2026-03-19 17:07:59.89929944 +0000 UTC m=+1643.045356980" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.135748 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565668-qtbgd"] Mar 19 17:08:00 crc kubenswrapper[4792]: E0319 17:08:00.136487 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.136506 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" Mar 19 17:08:00 crc kubenswrapper[4792]: E0319 17:08:00.136551 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-api" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.136557 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-api" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.136744 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-api" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.136791 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab62ad1f-f033-470f-ba9b-e75ace44e30e" containerName="neutron-httpd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.137558 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.143379 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.143690 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.144908 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.148562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565668-qtbgd"] Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.156904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk4gx\" (UniqueName: \"kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx\") pod \"auto-csr-approver-29565668-qtbgd\" (UID: \"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca\") " pod="openshift-infra/auto-csr-approver-29565668-qtbgd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.258120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk4gx\" (UniqueName: \"kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx\") pod \"auto-csr-approver-29565668-qtbgd\" (UID: \"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca\") " pod="openshift-infra/auto-csr-approver-29565668-qtbgd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.276478 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk4gx\" (UniqueName: \"kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx\") pod \"auto-csr-approver-29565668-qtbgd\" (UID: \"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca\") " 
pod="openshift-infra/auto-csr-approver-29565668-qtbgd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.568449 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.776216 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.866202 4792 generic.go:334] "Generic (PLEG): container finished" podID="466e2209-1931-4b44-a60b-f18124ede6ee" containerID="b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" exitCode=0 Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.866238 4792 generic.go:334] "Generic (PLEG): container finished" podID="466e2209-1931-4b44-a60b-f18124ede6ee" containerID="6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" exitCode=143 Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.867552 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.868193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerDied","Data":"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e"} Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.868227 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerDied","Data":"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b"} Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.868242 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"466e2209-1931-4b44-a60b-f18124ede6ee","Type":"ContainerDied","Data":"c490ddb6c0ccf74ec238d5b6ae4992480e98affcca471125804fda43b14ded10"} Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.868258 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.868892 4792 scope.go:117] "RemoveContainer" containerID="b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.881753 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.881828 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 
17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.881885 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.881932 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.882070 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.882189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.883967 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs\") pod \"466e2209-1931-4b44-a60b-f18124ede6ee\" (UID: \"466e2209-1931-4b44-a60b-f18124ede6ee\") " Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.884801 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id" (OuterVolumeSpecName: 
"etc-machine-id") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.885140 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs" (OuterVolumeSpecName: "logs") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.886962 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/466e2209-1931-4b44-a60b-f18124ede6ee-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.886993 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466e2209-1931-4b44-a60b-f18124ede6ee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.896015 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.899108 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts" (OuterVolumeSpecName: "scripts") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.901790 4792 scope.go:117] "RemoveContainer" containerID="6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.910246 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8" (OuterVolumeSpecName: "kube-api-access-99zf8") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "kube-api-access-99zf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.935297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.956044 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data" (OuterVolumeSpecName: "config-data") pod "466e2209-1931-4b44-a60b-f18124ede6ee" (UID: "466e2209-1931-4b44-a60b-f18124ede6ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.991066 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.991103 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.991118 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.991132 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/466e2209-1931-4b44-a60b-f18124ede6ee-kube-api-access-99zf8\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:00 crc kubenswrapper[4792]: I0319 17:08:00.991147 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466e2209-1931-4b44-a60b-f18124ede6ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.032439 4792 scope.go:117] "RemoveContainer" containerID="b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" Mar 19 17:08:01 crc kubenswrapper[4792]: E0319 17:08:01.032961 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e\": container with ID starting with b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e not found: ID does not exist" 
containerID="b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.032989 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e"} err="failed to get container status \"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e\": rpc error: code = NotFound desc = could not find container \"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e\": container with ID starting with b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e not found: ID does not exist" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.033012 4792 scope.go:117] "RemoveContainer" containerID="6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" Mar 19 17:08:01 crc kubenswrapper[4792]: E0319 17:08:01.033336 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b\": container with ID starting with 6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b not found: ID does not exist" containerID="6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.033383 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b"} err="failed to get container status \"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b\": rpc error: code = NotFound desc = could not find container \"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b\": container with ID starting with 6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b not found: ID does not exist" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.033417 4792 scope.go:117] 
"RemoveContainer" containerID="b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.033706 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e"} err="failed to get container status \"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e\": rpc error: code = NotFound desc = could not find container \"b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e\": container with ID starting with b9c2c920bbef84b18362fa524bc91db0b8f7fc7b85d1d4840dd7986c2b3dfc7e not found: ID does not exist" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.033749 4792 scope.go:117] "RemoveContainer" containerID="6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.034264 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b"} err="failed to get container status \"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b\": rpc error: code = NotFound desc = could not find container \"6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b\": container with ID starting with 6dd1c4cd8a1f3d3534a83bcd2c23d93e16bcd46c51bb38b91827d1a5ac31589b not found: ID does not exist" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.079384 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-68456dfd85-xsh6s" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.189750 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565668-qtbgd"] Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.228138 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:08:01 crc 
kubenswrapper[4792]: I0319 17:08:01.260791 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.274891 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:08:01 crc kubenswrapper[4792]: E0319 17:08:01.275423 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api-log" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.275445 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api-log" Mar 19 17:08:01 crc kubenswrapper[4792]: E0319 17:08:01.275473 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.275481 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.277602 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api-log" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.277677 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" containerName="cinder-api" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.279024 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.283066 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.283299 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.283539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.291065 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.298791 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data-custom\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.298879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-scripts\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.298969 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c0efe25-7ec1-4e80-80c8-812972764179-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299108 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rgtd\" (UniqueName: \"kubernetes.io/projected/9c0efe25-7ec1-4e80-80c8-812972764179-kube-api-access-8rgtd\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0efe25-7ec1-4e80-80c8-812972764179-logs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.299211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data\") pod 
\"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.400922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0efe25-7ec1-4e80-80c8-812972764179-logs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401622 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data-custom\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-scripts\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401702 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c0efe25-7ec1-4e80-80c8-812972764179-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rgtd\" (UniqueName: \"kubernetes.io/projected/9c0efe25-7ec1-4e80-80c8-812972764179-kube-api-access-8rgtd\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.401864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.402309 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c0efe25-7ec1-4e80-80c8-812972764179-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.402606 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0efe25-7ec1-4e80-80c8-812972764179-logs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 
crc kubenswrapper[4792]: I0319 17:08:01.412522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.416498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.416736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-scripts\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.418601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data-custom\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.418850 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.424454 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0efe25-7ec1-4e80-80c8-812972764179-config-data\") pod \"cinder-api-0\" (UID: 
\"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.425073 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rgtd\" (UniqueName: \"kubernetes.io/projected/9c0efe25-7ec1-4e80-80c8-812972764179-kube-api-access-8rgtd\") pod \"cinder-api-0\" (UID: \"9c0efe25-7ec1-4e80-80c8-812972764179\") " pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.670913 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.758827 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466e2209-1931-4b44-a60b-f18124ede6ee" path="/var/lib/kubelet/pods/466e2209-1931-4b44-a60b-f18124ede6ee/volumes" Mar 19 17:08:01 crc kubenswrapper[4792]: I0319 17:08:01.884101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" event={"ID":"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca","Type":"ContainerStarted","Data":"203bda2c79868f59f5bcb14c9509fcfe50afcad1a5412bc7cf8123cf89e62aec"} Mar 19 17:08:02 crc kubenswrapper[4792]: I0319 17:08:02.194072 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 19 17:08:02 crc kubenswrapper[4792]: I0319 17:08:02.893294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" event={"ID":"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca","Type":"ContainerStarted","Data":"dce6f83b932bc60629e2d547e2854e0cc5e89382ce9c281ed71a00d66632a143"} Mar 19 17:08:02 crc kubenswrapper[4792]: I0319 17:08:02.894586 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9c0efe25-7ec1-4e80-80c8-812972764179","Type":"ContainerStarted","Data":"b8b90ca2abf8db676aed86ae9a43dfcaa57d371d0be5f5b4756ac3b0ebd89428"} Mar 19 17:08:02 crc kubenswrapper[4792]: I0319 
17:08:02.919060 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" podStartSLOduration=1.8993269050000001 podStartE2EDuration="2.919038997s" podCreationTimestamp="2026-03-19 17:08:00 +0000 UTC" firstStartedPulling="2026-03-19 17:08:01.179196896 +0000 UTC m=+1644.325254436" lastFinishedPulling="2026-03-19 17:08:02.198908988 +0000 UTC m=+1645.344966528" observedRunningTime="2026-03-19 17:08:02.908620711 +0000 UTC m=+1646.054678251" watchObservedRunningTime="2026-03-19 17:08:02.919038997 +0000 UTC m=+1646.065096537" Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.690113 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5789c5b8cd-gst5f" Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.775688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle\") pod \"aa59a063-31ae-41e0-86a5-020f60d0113a\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.776499 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbzhj\" (UniqueName: \"kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj\") pod \"aa59a063-31ae-41e0-86a5-020f60d0113a\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.776866 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom\") pod \"aa59a063-31ae-41e0-86a5-020f60d0113a\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") " Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.777104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs\") pod \"aa59a063-31ae-41e0-86a5-020f60d0113a\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") "
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.777228 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data\") pod \"aa59a063-31ae-41e0-86a5-020f60d0113a\" (UID: \"aa59a063-31ae-41e0-86a5-020f60d0113a\") "
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.779681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs" (OuterVolumeSpecName: "logs") pod "aa59a063-31ae-41e0-86a5-020f60d0113a" (UID: "aa59a063-31ae-41e0-86a5-020f60d0113a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.782434 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa59a063-31ae-41e0-86a5-020f60d0113a-logs\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.784416 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj" (OuterVolumeSpecName: "kube-api-access-gbzhj") pod "aa59a063-31ae-41e0-86a5-020f60d0113a" (UID: "aa59a063-31ae-41e0-86a5-020f60d0113a"). InnerVolumeSpecName "kube-api-access-gbzhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.798935 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aa59a063-31ae-41e0-86a5-020f60d0113a" (UID: "aa59a063-31ae-41e0-86a5-020f60d0113a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.842963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa59a063-31ae-41e0-86a5-020f60d0113a" (UID: "aa59a063-31ae-41e0-86a5-020f60d0113a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.884546 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.884899 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbzhj\" (UniqueName: \"kubernetes.io/projected/aa59a063-31ae-41e0-86a5-020f60d0113a-kube-api-access-gbzhj\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.885032 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.901859 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data" (OuterVolumeSpecName: "config-data") pod "aa59a063-31ae-41e0-86a5-020f60d0113a" (UID: "aa59a063-31ae-41e0-86a5-020f60d0113a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.909447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9c0efe25-7ec1-4e80-80c8-812972764179","Type":"ContainerStarted","Data":"e915abac0b383683c022b31070331fb28db85126ea590ca61d804d279d2534d1"}
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.909496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9c0efe25-7ec1-4e80-80c8-812972764179","Type":"ContainerStarted","Data":"0677aae445832efe37e6e1693df2cb9468aa09ab3b1f042b886d8b65a9ebf38e"}
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.909713 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.918217 4792 generic.go:334] "Generic (PLEG): container finished" podID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerID="5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145" exitCode=137
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.918292 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerDied","Data":"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"}
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.918365 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5789c5b8cd-gst5f"
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.918379 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5789c5b8cd-gst5f" event={"ID":"aa59a063-31ae-41e0-86a5-020f60d0113a","Type":"ContainerDied","Data":"1fb4a38487621056f1fff23c57626657667234767f033d33dd1d900f19c6a46b"}
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.918421 4792 scope.go:117] "RemoveContainer" containerID="5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.960532 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.960512187 podStartE2EDuration="2.960512187s" podCreationTimestamp="2026-03-19 17:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:03.94350838 +0000 UTC m=+1647.089565920" watchObservedRunningTime="2026-03-19 17:08:03.960512187 +0000 UTC m=+1647.106569727"
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.969782 4792 scope.go:117] "RemoveContainer" containerID="1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.979589 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"]
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.990047 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa59a063-31ae-41e0-86a5-020f60d0113a-config-data\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:03 crc kubenswrapper[4792]: I0319 17:08:03.992812 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5789c5b8cd-gst5f"]
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.001586 4792 scope.go:117] "RemoveContainer" containerID="5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"
Mar 19 17:08:04 crc kubenswrapper[4792]: E0319 17:08:04.003375 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145\": container with ID starting with 5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145 not found: ID does not exist" containerID="5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.003489 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145"} err="failed to get container status \"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145\": rpc error: code = NotFound desc = could not find container \"5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145\": container with ID starting with 5a9d950875fc8052ac734f23c39684d82dd5b11c88fdf6d49e1b57db90873145 not found: ID does not exist"
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.003574 4792 scope.go:117] "RemoveContainer" containerID="1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"
Mar 19 17:08:04 crc kubenswrapper[4792]: E0319 17:08:04.004057 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53\": container with ID starting with 1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53 not found: ID does not exist" containerID="1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.004095 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53"} err="failed to get container status \"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53\": rpc error: code = NotFound desc = could not find container \"1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53\": container with ID starting with 1c1fb2658d9012fafe06d3b4114cdc081a4d7cbe48034363d920604b16882a53 not found: ID does not exist"
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.932437 4792 generic.go:334] "Generic (PLEG): container finished" podID="ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" containerID="dce6f83b932bc60629e2d547e2854e0cc5e89382ce9c281ed71a00d66632a143" exitCode=0
Mar 19 17:08:04 crc kubenswrapper[4792]: I0319 17:08:04.932805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" event={"ID":"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca","Type":"ContainerDied","Data":"dce6f83b932bc60629e2d547e2854e0cc5e89382ce9c281ed71a00d66632a143"}
Mar 19 17:08:05 crc kubenswrapper[4792]: I0319 17:08:05.758895 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" path="/var/lib/kubelet/pods/aa59a063-31ae-41e0-86a5-020f60d0113a/volumes"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.258800 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: E0319 17:08:06.259459 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api-log"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.259477 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api-log"
Mar 19 17:08:06 crc kubenswrapper[4792]: E0319 17:08:06.259497 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.259504 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.259795 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.259822 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api-log"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.260829 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.263452 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.263756 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.265487 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kfwk8"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.282391 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.349005 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.349057 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkcz\" (UniqueName: \"kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.349357 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.349426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.370802 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.451912 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.451982 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkkcz\" (UniqueName: \"kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.452093 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.452125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.453309 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"]
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.453550 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="dnsmasq-dns" containerID="cri-o://d1dd9690b40bcc47a0a6dbe35ec19adee2afdcdeaaf7595f56d550aa61e1784a" gracePeriod=10
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.454684 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.459931 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565668-qtbgd"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.470000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.471832 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.505389 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkkcz\" (UniqueName: \"kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz\") pod \"openstackclient\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.553204 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk4gx\" (UniqueName: \"kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx\") pod \"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca\" (UID: \"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca\") "
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.559349 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx" (OuterVolumeSpecName: "kube-api-access-bk4gx") pod "ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" (UID: "ea79aa54-fee6-4c52-a337-2a7e3a3da9ca"). InnerVolumeSpecName "kube-api-access-bk4gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.593660 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.653541 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.656656 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk4gx\" (UniqueName: \"kubernetes.io/projected/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca-kube-api-access-bk4gx\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.667683 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.708247 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: E0319 17:08:06.708919 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" containerName="oc"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.708946 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" containerName="oc"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.709304 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" containerName="oc"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.710368 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.733734 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.759443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjst\" (UniqueName: \"kubernetes.io/projected/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-kube-api-access-2sjst\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.759529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.759608 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.759648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: E0319 17:08:06.769884 4792 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 19 17:08:06 crc kubenswrapper[4792]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_bdee470d-2d27-4e67-8fdc-98eb7ef8ddba_0(19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1" Netns:"/var/run/netns/ca0d9214-e101-46f4-8d00-840fb3eeee7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1;K8S_POD_UID=bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba]: expected pod UID "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" but got "d7885af7-09a3-4ea4-b59f-2de96f42fd0b" from Kube API
Mar 19 17:08:06 crc kubenswrapper[4792]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 19 17:08:06 crc kubenswrapper[4792]: >
Mar 19 17:08:06 crc kubenswrapper[4792]: E0319 17:08:06.770152 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 19 17:08:06 crc kubenswrapper[4792]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_bdee470d-2d27-4e67-8fdc-98eb7ef8ddba_0(19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1" Netns:"/var/run/netns/ca0d9214-e101-46f4-8d00-840fb3eeee7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=19985f5bf15549cbdf75176d20e066ef7b7803b01313d4131901b022beeb4db1;K8S_POD_UID=bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba]: expected pod UID "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" but got "d7885af7-09a3-4ea4-b59f-2de96f42fd0b" from Kube API
Mar 19 17:08:06 crc kubenswrapper[4792]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 19 17:08:06 crc kubenswrapper[4792]: > pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.863780 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.864000 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjst\" (UniqueName: \"kubernetes.io/projected/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-kube-api-access-2sjst\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.864035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.864071 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.864899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.872308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.875553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.884181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjst\" (UniqueName: \"kubernetes.io/projected/d7885af7-09a3-4ea4-b59f-2de96f42fd0b-kube-api-access-2sjst\") pod \"openstackclient\" (UID: \"d7885af7-09a3-4ea4-b59f-2de96f42fd0b\") " pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.959300 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565668-qtbgd"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.960188 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565668-qtbgd" event={"ID":"ea79aa54-fee6-4c52-a337-2a7e3a3da9ca","Type":"ContainerDied","Data":"203bda2c79868f59f5bcb14c9509fcfe50afcad1a5412bc7cf8123cf89e62aec"}
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.960237 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203bda2c79868f59f5bcb14c9509fcfe50afcad1a5412bc7cf8123cf89e62aec"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.968390 4792 generic.go:334] "Generic (PLEG): container finished" podID="35fedaab-ff86-4533-933f-76c7143d9614" containerID="d1dd9690b40bcc47a0a6dbe35ec19adee2afdcdeaaf7595f56d550aa61e1784a" exitCode=0
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.968479 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.968892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" event={"ID":"35fedaab-ff86-4533-933f-76c7143d9614","Type":"ContainerDied","Data":"d1dd9690b40bcc47a0a6dbe35ec19adee2afdcdeaaf7595f56d550aa61e1784a"}
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.972200 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b"
Mar 19 17:08:06 crc kubenswrapper[4792]: I0319 17:08:06.980269 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.016706 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565662-bdzvt"]
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.029552 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565662-bdzvt"]
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.030453 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl"
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.042870 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km498\" (UniqueName: \"kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkkcz\" (UniqueName: \"kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz\") pod \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067382 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb\") pod \"35fedaab-ff86-4533-933f-76c7143d9614\" (UID: \"35fedaab-ff86-4533-933f-76c7143d9614\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle\") pod \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config\") pod \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.067607 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret\") pod \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\" (UID: \"bdee470d-2d27-4e67-8fdc-98eb7ef8ddba\") "
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.068651 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" (UID: "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.070034 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.072090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" (UID: "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.080103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" (UID: "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.080257 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz" (OuterVolumeSpecName: "kube-api-access-lkkcz") pod "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" (UID: "bdee470d-2d27-4e67-8fdc-98eb7ef8ddba"). InnerVolumeSpecName "kube-api-access-lkkcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.083519 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498" (OuterVolumeSpecName: "kube-api-access-km498") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "kube-api-access-km498". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.140459 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.144416 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.153175 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config" (OuterVolumeSpecName: "config") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.153180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.166752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35fedaab-ff86-4533-933f-76c7143d9614" (UID: "35fedaab-ff86-4533-933f-76c7143d9614"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172030 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkkcz\" (UniqueName: \"kubernetes.io/projected/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-kube-api-access-lkkcz\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172067 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172080 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-config\") on node \"crc\" DevicePath \"\""
Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172096 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-dns-svc\") on node \"crc\" DevicePath 
\"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172105 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172113 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fedaab-ff86-4533-933f-76c7143d9614-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172123 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172131 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.172139 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km498\" (UniqueName: \"kubernetes.io/projected/35fedaab-ff86-4533-933f-76c7143d9614-kube-api-access-km498\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.447050 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84cdd6c86c-5thrd"] Mar 19 17:08:07 crc kubenswrapper[4792]: E0319 17:08:07.448242 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="dnsmasq-dns" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.448260 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="dnsmasq-dns" Mar 19 17:08:07 crc kubenswrapper[4792]: E0319 17:08:07.448289 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="init" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.448294 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="init" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.448548 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="35fedaab-ff86-4533-933f-76c7143d9614" containerName="dnsmasq-dns" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.449738 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.459701 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84cdd6c86c-5thrd"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.469939 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.470133 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.470239 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.479872 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-public-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.479945 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-internal-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-run-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-combined-ca-bundle\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-etc-swift\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480271 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-log-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480299 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lxklv\" (UniqueName: \"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-kube-api-access-lxklv\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.480326 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-config-data\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.573558 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wqttw"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.576567 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.581802 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-etc-swift\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.581889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-log-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.581910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxklv\" (UniqueName: 
\"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-kube-api-access-lxklv\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.581930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-config-data\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.581980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-public-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.582018 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-internal-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.582104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-run-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.582130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-combined-ca-bundle\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.583918 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-log-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.585210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-run-httpd\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.589457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-combined-ca-bundle\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.590646 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-public-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.595161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-etc-swift\") pod \"swift-proxy-84cdd6c86c-5thrd\" 
(UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.595177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-config-data\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.602632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxklv\" (UniqueName: \"kubernetes.io/projected/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-kube-api-access-lxklv\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.604407 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4-internal-tls-certs\") pod \"swift-proxy-84cdd6c86c-5thrd\" (UID: \"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4\") " pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.621958 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wqttw"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.687899 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.690958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tsqkg\" (UniqueName: \"kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.695650 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.733227 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-m4xgs"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.736828 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.793561 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.793681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmkc\" (UniqueName: \"kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.793725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: 
I0319 17:08:07.794015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqkg\" (UniqueName: \"kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.796671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.797463 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02835fca-719f-4bb9-8124-624a8fc2c074" path="/var/lib/kubelet/pods/02835fca-719f-4bb9-8124-624a8fc2c074/volumes" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.804512 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" path="/var/lib/kubelet/pods/bdee470d-2d27-4e67-8fdc-98eb7ef8ddba/volumes" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.805750 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m4xgs"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.806710 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.820068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqkg\" (UniqueName: \"kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg\") pod \"nova-api-db-create-wqttw\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.821419 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-819d-account-create-update-nw88m"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.822893 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.830367 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.854295 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-819d-account-create-update-nw88m"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.896363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.896426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4w7\" (UniqueName: \"kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " 
pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.896760 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmkc\" (UniqueName: \"kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.896917 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.898562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.925221 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmkc\" (UniqueName: \"kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc\") pod \"nova-cell0-db-create-m4xgs\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.960405 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-5k7gr"] Mar 19 17:08:07 crc kubenswrapper[4792]: I0319 17:08:07.975209 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.002945 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5k7gr"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.004466 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.004654 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltfq\" (UniqueName: \"kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.004740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4w7\" (UniqueName: \"kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.004824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.014025 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7885af7-09a3-4ea4-b59f-2de96f42fd0b","Type":"ContainerStarted","Data":"204dd36a425ce6e1e1f720ea422dfbd88dc5381464388c092632521be48eb319"} Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.014535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.019904 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.020802 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.021543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-vz7kl" event={"ID":"35fedaab-ff86-4533-933f-76c7143d9614","Type":"ContainerDied","Data":"352bbcf81ea503f3086cfea649a4a88844226f4c9784d70710faec7593534242"} Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.021582 4792 scope.go:117] "RemoveContainer" containerID="d1dd9690b40bcc47a0a6dbe35ec19adee2afdcdeaaf7595f56d550aa61e1784a" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.023898 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-edb3-account-create-update-wv7rz"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.027771 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.030833 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.045367 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-edb3-account-create-update-wv7rz"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.056601 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bdee470d-2d27-4e67-8fdc-98eb7ef8ddba" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.064160 4792 scope.go:117] "RemoveContainer" containerID="037681b5cbe975348b0b08b9ba09a4857810b45b4e703f5610825637c7d58455" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.071332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4w7\" (UniqueName: \"kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7\") pod \"nova-api-819d-account-create-update-nw88m\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.075078 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.082969 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.093650 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.094479 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-vz7kl"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.105940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4q4b\" (UniqueName: \"kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.106003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.106127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.106184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cltfq\" (UniqueName: \"kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.106974 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.123037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cltfq\" (UniqueName: \"kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq\") pod \"nova-cell1-db-create-5k7gr\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.208880 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.210035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4q4b\" (UniqueName: \"kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.211934 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc 
kubenswrapper[4792]: I0319 17:08:08.237303 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-82d7-account-create-update-mzkgh"] Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.240342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.244645 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.247559 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4q4b\" (UniqueName: \"kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b\") pod \"nova-cell0-edb3-account-create-update-wv7rz\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.248115 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.260913 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-82d7-account-create-update-mzkgh"] Mar 19 17:08:08 crc kubenswrapper[4792]: E0319 17:08:08.316021 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35fedaab_ff86_4533_933f_76c7143d9614.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdee470d_2d27_4e67_8fdc_98eb7ef8ddba.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.346029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.346201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whtrn\" (UniqueName: \"kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.347015 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.367790 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.449716 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.450122 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whtrn\" (UniqueName: \"kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.450737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.473291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whtrn\" (UniqueName: \"kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn\") pod \"nova-cell1-82d7-account-create-update-mzkgh\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.498276 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5789c5b8cd-gst5f" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" 
containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.498353 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5789c5b8cd-gst5f" podUID="aa59a063-31ae-41e0-86a5-020f60d0113a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.576768 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.583583 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84cdd6c86c-5thrd"] Mar 19 17:08:08 crc kubenswrapper[4792]: W0319 17:08:08.603998 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5039c3a0_47c3_4b1e_94d4_61bd47a4f3f4.slice/crio-25cd4f590fa38a94cc33f2e66062b8aee2e98e91739be5fa13a47ff653009937 WatchSource:0}: Error finding container 25cd4f590fa38a94cc33f2e66062b8aee2e98e91739be5fa13a47ff653009937: Status 404 returned error can't find the container with id 25cd4f590fa38a94cc33f2e66062b8aee2e98e91739be5fa13a47ff653009937 Mar 19 17:08:08 crc kubenswrapper[4792]: E0319 17:08:08.668081 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:46bb501571939286d5e6dc7b04b8aa97f362b222218b77201f405a56d0e03ae3: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Mar 19 17:08:08 crc kubenswrapper[4792]: E0319 17:08:08.668261 4792 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfch58bh64bh55h8dh575h5f8h568h689h67chf6h58fh5cfh599h5dbh75h648h666h5dbh67dh7h64bh675h575hb6h699h689h7fhf4hb6h656h55fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(d7885af7-09a3-4ea4-b59f-2de96f42fd0b): ErrImagePull: reading blob sha256:46bb501571939286d5e6dc7b04b8aa97f362b222218b77201f405a56d0e03ae3: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Mar 19 17:08:08 crc kubenswrapper[4792]: E0319 17:08:08.671046 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"reading blob sha256:46bb501571939286d5e6dc7b04b8aa97f362b222218b77201f405a56d0e03ae3: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"" pod="openstack/openstackclient" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.744106 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:08:08 crc kubenswrapper[4792]: E0319 17:08:08.744364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:08:08 crc kubenswrapper[4792]: I0319 17:08:08.939297 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wqttw"] Mar 19 17:08:09 crc 
kubenswrapper[4792]: I0319 17:08:09.108957 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m4xgs"] Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.141086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqttw" event={"ID":"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf","Type":"ContainerStarted","Data":"70c85d228a01e3d69ec4a05809131122c442f90152b8455620d4e66055751e7e"} Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.159444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84cdd6c86c-5thrd" event={"ID":"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4","Type":"ContainerStarted","Data":"25cd4f590fa38a94cc33f2e66062b8aee2e98e91739be5fa13a47ff653009937"} Mar 19 17:08:09 crc kubenswrapper[4792]: E0319 17:08:09.160411 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b" Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.336532 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-edb3-account-create-update-wv7rz"] Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.644216 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-82d7-account-create-update-mzkgh"] Mar 19 17:08:09 crc kubenswrapper[4792]: W0319 17:08:09.682614 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4019769_bbd1_4dea_b732_315d331cb7c7.slice/crio-43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515 WatchSource:0}: Error finding container 43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515: Status 404 returned error can't find the container with id 
43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515 Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.695658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-819d-account-create-update-nw88m"] Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.710493 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5k7gr"] Mar 19 17:08:09 crc kubenswrapper[4792]: I0319 17:08:09.759265 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fedaab-ff86-4533-933f-76c7143d9614" path="/var/lib/kubelet/pods/35fedaab-ff86-4533-933f-76c7143d9614/volumes" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.048639 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.062943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75d8cc585d-x4dns" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.160473 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.161108 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c679c588-pcfbf" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-log" containerID="cri-o://f16e9c510929366875024e6d0538c492fd1f793c410c58461e058867de95a88b" gracePeriod=30 Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.162164 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c679c588-pcfbf" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-api" containerID="cri-o://00bf7acbdf98d22fa5bdbb646f387a0b0040a58f9197e80e83d17e17987bcb99" gracePeriod=30 Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.194138 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-819d-account-create-update-nw88m" event={"ID":"d4019769-bbd1-4dea-b732-315d331cb7c7","Type":"ContainerStarted","Data":"6d727b4b525c1e280aac89e31eff969fe6948396c520e8c92da3256d31ca8560"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.194190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-819d-account-create-update-nw88m" event={"ID":"d4019769-bbd1-4dea-b732-315d331cb7c7","Type":"ContainerStarted","Data":"43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.198772 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5k7gr" event={"ID":"a2b6a98e-4345-443a-b896-a4b73cda3c34","Type":"ContainerStarted","Data":"1ad8ae26ca5987196abf49447799b0988f8c599ccbbaa2a164bac2a595ecc728"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.198822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5k7gr" event={"ID":"a2b6a98e-4345-443a-b896-a4b73cda3c34","Type":"ContainerStarted","Data":"ca2f87b4ab7bb203cad42e0259353f57bd27ff756f33bd5a7ea6c35bbe040c92"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.204713 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84cdd6c86c-5thrd" event={"ID":"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4","Type":"ContainerStarted","Data":"1bd7a9a9d3112eb393bb267565d2e7691d67edf06609a50b5a7aea8343f60721"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.204756 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84cdd6c86c-5thrd" event={"ID":"5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4","Type":"ContainerStarted","Data":"64a0a356119ace5a23e8cfa5246923f8c51e638f5ac42c11fdc1c22585869b11"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.205996 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:10 crc 
kubenswrapper[4792]: I0319 17:08:10.216752 4792 generic.go:334] "Generic (PLEG): container finished" podID="62f7cad5-612c-4946-8596-c7e5837465a1" containerID="841baf1780d2a5eae61314bcb50a0569a901cba30c27e2fd47e09001f7ff2265" exitCode=0 Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.216829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" event={"ID":"62f7cad5-612c-4946-8596-c7e5837465a1","Type":"ContainerDied","Data":"841baf1780d2a5eae61314bcb50a0569a901cba30c27e2fd47e09001f7ff2265"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.216880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" event={"ID":"62f7cad5-612c-4946-8596-c7e5837465a1","Type":"ContainerStarted","Data":"4acf420be9733061a3b46a8ebb27a3bd22dcc0828c115f8aa72900a7b5f71b62"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.223280 4792 generic.go:334] "Generic (PLEG): container finished" podID="15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" containerID="58b852e0a02edff14382c2c1cb90b77ac07614a4caf9a8f97efe3f35a2adb4ef" exitCode=0 Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.223368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqttw" event={"ID":"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf","Type":"ContainerDied","Data":"58b852e0a02edff14382c2c1cb90b77ac07614a4caf9a8f97efe3f35a2adb4ef"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.231377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" event={"ID":"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e","Type":"ContainerStarted","Data":"bec299b309a1f6c92fd1747ec186a165f4ff17b80a449cc087da337d44397e8f"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.231433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" 
event={"ID":"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e","Type":"ContainerStarted","Data":"fa9ca56d792e41d9110f0a9788e626781b4d18ac151658eb81fdef6c18939eec"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.233567 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-819d-account-create-update-nw88m" podStartSLOduration=3.233549861 podStartE2EDuration="3.233549861s" podCreationTimestamp="2026-03-19 17:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:10.223149126 +0000 UTC m=+1653.369206666" watchObservedRunningTime="2026-03-19 17:08:10.233549861 +0000 UTC m=+1653.379607401" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.246366 4792 generic.go:334] "Generic (PLEG): container finished" podID="57162945-9d65-4f62-b049-d8e61a06c508" containerID="da3d53b83b5a6366b01c11d304031a2e82f0abbf868ec006fe4899dd73995944" exitCode=0 Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.247740 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m4xgs" event={"ID":"57162945-9d65-4f62-b049-d8e61a06c508","Type":"ContainerDied","Data":"da3d53b83b5a6366b01c11d304031a2e82f0abbf868ec006fe4899dd73995944"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.247778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m4xgs" event={"ID":"57162945-9d65-4f62-b049-d8e61a06c508","Type":"ContainerStarted","Data":"b05e2cb3664319765c6bcb177f30cee00cff60f57f6dfd8d59b5c2d58526c6e0"} Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.294337 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84cdd6c86c-5thrd" podStartSLOduration=3.29431784 podStartE2EDuration="3.29431784s" podCreationTimestamp="2026-03-19 17:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:10.288176301 +0000 UTC m=+1653.434233851" watchObservedRunningTime="2026-03-19 17:08:10.29431784 +0000 UTC m=+1653.440375380" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.320279 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-5k7gr" podStartSLOduration=3.320251981 podStartE2EDuration="3.320251981s" podCreationTimestamp="2026-03-19 17:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:10.305216988 +0000 UTC m=+1653.451274528" watchObservedRunningTime="2026-03-19 17:08:10.320251981 +0000 UTC m=+1653.466309531" Mar 19 17:08:10 crc kubenswrapper[4792]: I0319 17:08:10.450711 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" podStartSLOduration=2.450688082 podStartE2EDuration="2.450688082s" podCreationTimestamp="2026-03-19 17:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:10.373577065 +0000 UTC m=+1653.519634625" watchObservedRunningTime="2026-03-19 17:08:10.450688082 +0000 UTC m=+1653.596745622" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.022382 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.022959 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-central-agent" containerID="cri-o://bdbf99c4bb4f94f84d18f638be74736db4b076104dab51e068f7301fa70656aa" gracePeriod=30 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.023747 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="proxy-httpd" containerID="cri-o://6722e62c7746ba2911bc2c90f0a321f49a4ec2e65b514f6314334aea642eaee0" gracePeriod=30 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.023796 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="sg-core" containerID="cri-o://32b40ff99bd7a1480b078105d8eb835b39aaed9b71326dda966ec4499aa3e2ca" gracePeriod=30 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.023852 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-notification-agent" containerID="cri-o://3362c0a60992eb4278ec30c79cadcf17969283e0a2509c8936579a727b041d25" gracePeriod=30 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.033451 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.214:3000/\": EOF" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.227117 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.228685 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.238708 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.244354 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dfllr" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.244602 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.257366 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.267484 4792 generic.go:334] "Generic (PLEG): container finished" podID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerID="f16e9c510929366875024e6d0538c492fd1f793c410c58461e058867de95a88b" exitCode=143 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.267767 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerDied","Data":"f16e9c510929366875024e6d0538c492fd1f793c410c58461e058867de95a88b"} Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.274770 4792 generic.go:334] "Generic (PLEG): container finished" podID="1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" containerID="bec299b309a1f6c92fd1747ec186a165f4ff17b80a449cc087da337d44397e8f" exitCode=0 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.274893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" event={"ID":"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e","Type":"ContainerDied","Data":"bec299b309a1f6c92fd1747ec186a165f4ff17b80a449cc087da337d44397e8f"} Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.301755 4792 generic.go:334] "Generic (PLEG): container 
finished" podID="d4019769-bbd1-4dea-b732-315d331cb7c7" containerID="6d727b4b525c1e280aac89e31eff969fe6948396c520e8c92da3256d31ca8560" exitCode=0 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.301870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-819d-account-create-update-nw88m" event={"ID":"d4019769-bbd1-4dea-b732-315d331cb7c7","Type":"ContainerDied","Data":"6d727b4b525c1e280aac89e31eff969fe6948396c520e8c92da3256d31ca8560"} Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.316213 4792 generic.go:334] "Generic (PLEG): container finished" podID="a2b6a98e-4345-443a-b896-a4b73cda3c34" containerID="1ad8ae26ca5987196abf49447799b0988f8c599ccbbaa2a164bac2a595ecc728" exitCode=0 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.316350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5k7gr" event={"ID":"a2b6a98e-4345-443a-b896-a4b73cda3c34","Type":"ContainerDied","Data":"1ad8ae26ca5987196abf49447799b0988f8c599ccbbaa2a164bac2a595ecc728"} Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.322815 4792 generic.go:334] "Generic (PLEG): container finished" podID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerID="32b40ff99bd7a1480b078105d8eb835b39aaed9b71326dda966ec4499aa3e2ca" exitCode=2 Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.325541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerDied","Data":"32b40ff99bd7a1480b078105d8eb835b39aaed9b71326dda966ec4499aa3e2ca"} Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.327671 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.348184 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.350314 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.381976 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.382210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.382380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.382476 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fff6h\" (UniqueName: \"kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.435506 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484303 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484349 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484404 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484453 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484524 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fff6h\" (UniqueName: \"kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8h5c\" (UniqueName: \"kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484601 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.484682 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.503884 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.504782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.522064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.529239 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.531316 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.532516 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fff6h\" (UniqueName: \"kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h\") pod \"heat-engine-d5c5d8dc8-z7j2w\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.534210 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.551914 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:11 crc kubenswrapper[4792]: I0319 17:08:11.564572 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.594601 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.594665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.599535 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.594742 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.600590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.600713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.600789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8h5c\" (UniqueName: \"kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.602035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.602326 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.602680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.603343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.607049 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.614386 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.631139 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.657298 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.664424 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w8h5c\" (UniqueName: \"kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c\") pod \"dnsmasq-dns-7d978555f9-f2c6n\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703114 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703160 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9cw2\" (UniqueName: \"kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703240 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzq2k\" (UniqueName: \"kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.703431 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.808573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 
17:08:11.808822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9cw2\" (UniqueName: \"kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.808886 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzq2k\" (UniqueName: \"kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.809006 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.809041 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.809073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.809101 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.809146 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.814505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.824222 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.824510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.829081 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.834489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.835590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.839500 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.846440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9cw2\" (UniqueName: \"kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2\") pod \"heat-api-5f7fc4678c-j72pt\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.849737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzq2k\" (UniqueName: \"kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k\") pod \"heat-cfnapi-fd575b5d8-m4xkm\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.874954 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:11.890657 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.112636 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.219174 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmkc\" (UniqueName: \"kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc\") pod \"57162945-9d65-4f62-b049-d8e61a06c508\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.220301 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts\") pod \"57162945-9d65-4f62-b049-d8e61a06c508\" (UID: \"57162945-9d65-4f62-b049-d8e61a06c508\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.223499 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57162945-9d65-4f62-b049-d8e61a06c508" (UID: "57162945-9d65-4f62-b049-d8e61a06c508"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.224763 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57162945-9d65-4f62-b049-d8e61a06c508-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.242105 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc" (OuterVolumeSpecName: "kube-api-access-zvmkc") pod "57162945-9d65-4f62-b049-d8e61a06c508" (UID: "57162945-9d65-4f62-b049-d8e61a06c508"). InnerVolumeSpecName "kube-api-access-zvmkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.331133 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvmkc\" (UniqueName: \"kubernetes.io/projected/57162945-9d65-4f62-b049-d8e61a06c508-kube-api-access-zvmkc\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.343303 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.408544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" event={"ID":"62f7cad5-612c-4946-8596-c7e5837465a1","Type":"ContainerDied","Data":"4acf420be9733061a3b46a8ebb27a3bd22dcc0828c115f8aa72900a7b5f71b62"} Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.408580 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4acf420be9733061a3b46a8ebb27a3bd22dcc0828c115f8aa72900a7b5f71b62" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.408682 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-edb3-account-create-update-wv7rz" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.438249 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.438648 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4q4b\" (UniqueName: \"kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b\") pod \"62f7cad5-612c-4946-8596-c7e5837465a1\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.438796 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts\") pod \"62f7cad5-612c-4946-8596-c7e5837465a1\" (UID: \"62f7cad5-612c-4946-8596-c7e5837465a1\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.441187 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62f7cad5-612c-4946-8596-c7e5837465a1" (UID: "62f7cad5-612c-4946-8596-c7e5837465a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.453079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b" (OuterVolumeSpecName: "kube-api-access-g4q4b") pod "62f7cad5-612c-4946-8596-c7e5837465a1" (UID: "62f7cad5-612c-4946-8596-c7e5837465a1"). InnerVolumeSpecName "kube-api-access-g4q4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.460666 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m4xgs" event={"ID":"57162945-9d65-4f62-b049-d8e61a06c508","Type":"ContainerDied","Data":"b05e2cb3664319765c6bcb177f30cee00cff60f57f6dfd8d59b5c2d58526c6e0"} Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.460701 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b05e2cb3664319765c6bcb177f30cee00cff60f57f6dfd8d59b5c2d58526c6e0" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.460778 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m4xgs" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.481779 4792 generic.go:334] "Generic (PLEG): container finished" podID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerID="6722e62c7746ba2911bc2c90f0a321f49a4ec2e65b514f6314334aea642eaee0" exitCode=0 Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.481804 4792 generic.go:334] "Generic (PLEG): container finished" podID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerID="3362c0a60992eb4278ec30c79cadcf17969283e0a2509c8936579a727b041d25" exitCode=0 Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.481811 4792 generic.go:334] "Generic (PLEG): container finished" podID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerID="bdbf99c4bb4f94f84d18f638be74736db4b076104dab51e068f7301fa70656aa" exitCode=0 Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.482180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerDied","Data":"6722e62c7746ba2911bc2c90f0a321f49a4ec2e65b514f6314334aea642eaee0"} Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.482208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerDied","Data":"3362c0a60992eb4278ec30c79cadcf17969283e0a2509c8936579a727b041d25"} Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.482218 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerDied","Data":"bdbf99c4bb4f94f84d18f638be74736db4b076104dab51e068f7301fa70656aa"} Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.551259 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqkg\" (UniqueName: \"kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg\") pod \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.551350 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts\") pod \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\" (UID: \"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf\") " Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.552253 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" (UID: "15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.553563 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.553593 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4q4b\" (UniqueName: \"kubernetes.io/projected/62f7cad5-612c-4946-8596-c7e5837465a1-kube-api-access-g4q4b\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.553605 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7cad5-612c-4946-8596-c7e5837465a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.568035 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg" (OuterVolumeSpecName: "kube-api-access-tsqkg") pod "15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" (UID: "15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf"). InnerVolumeSpecName "kube-api-access-tsqkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:12 crc kubenswrapper[4792]: I0319 17:08:12.656063 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqkg\" (UniqueName: \"kubernetes.io/projected/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf-kube-api-access-tsqkg\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.564640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqttw" event={"ID":"15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf","Type":"ContainerDied","Data":"70c85d228a01e3d69ec4a05809131122c442f90152b8455620d4e66055751e7e"} Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.565005 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c85d228a01e3d69ec4a05809131122c442f90152b8455620d4e66055751e7e" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.565086 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wqttw" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.958935 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.965899 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.974821 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:13 crc kubenswrapper[4792]: I0319 17:08:13.981219 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004754 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts\") pod \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004792 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts\") pod \"a2b6a98e-4345-443a-b896-a4b73cda3c34\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004849 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft4w7\" (UniqueName: \"kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7\") pod \"d4019769-bbd1-4dea-b732-315d331cb7c7\" (UID: \"d4019769-bbd1-4dea-b732-315d331cb7c7\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004959 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.004977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005019 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005041 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whtrn\" (UniqueName: \"kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn\") pod \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\" (UID: \"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk6jj\" (UniqueName: \"kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005199 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts\") pod \"d4019769-bbd1-4dea-b732-315d331cb7c7\" (UID: 
\"d4019769-bbd1-4dea-b732-315d331cb7c7\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005251 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cltfq\" (UniqueName: \"kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq\") pod \"a2b6a98e-4345-443a-b896-a4b73cda3c34\" (UID: \"a2b6a98e-4345-443a-b896-a4b73cda3c34\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data\") pod \"ccfb9f12-84fa-412b-900d-d254cf4303dc\" (UID: \"ccfb9f12-84fa-412b-900d-d254cf4303dc\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005732 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" (UID: "1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.005787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4019769-bbd1-4dea-b732-315d331cb7c7" (UID: "d4019769-bbd1-4dea-b732-315d331cb7c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.006300 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.006466 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.006531 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2b6a98e-4345-443a-b896-a4b73cda3c34" (UID: "a2b6a98e-4345-443a-b896-a4b73cda3c34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.021631 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn" (OuterVolumeSpecName: "kube-api-access-whtrn") pod "1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" (UID: "1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e"). InnerVolumeSpecName "kube-api-access-whtrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.021773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts" (OuterVolumeSpecName: "scripts") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.023196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7" (OuterVolumeSpecName: "kube-api-access-ft4w7") pod "d4019769-bbd1-4dea-b732-315d331cb7c7" (UID: "d4019769-bbd1-4dea-b732-315d331cb7c7"). InnerVolumeSpecName "kube-api-access-ft4w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.027095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj" (OuterVolumeSpecName: "kube-api-access-qk6jj") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "kube-api-access-qk6jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.029906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq" (OuterVolumeSpecName: "kube-api-access-cltfq") pod "a2b6a98e-4345-443a-b896-a4b73cda3c34" (UID: "a2b6a98e-4345-443a-b896-a4b73cda3c34"). InnerVolumeSpecName "kube-api-access-cltfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108693 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108742 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft4w7\" (UniqueName: \"kubernetes.io/projected/d4019769-bbd1-4dea-b732-315d331cb7c7-kube-api-access-ft4w7\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108756 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccfb9f12-84fa-412b-900d-d254cf4303dc-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108771 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whtrn\" (UniqueName: \"kubernetes.io/projected/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-kube-api-access-whtrn\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108785 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108797 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk6jj\" (UniqueName: \"kubernetes.io/projected/ccfb9f12-84fa-412b-900d-d254cf4303dc-kube-api-access-qk6jj\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.108810 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4019769-bbd1-4dea-b732-315d331cb7c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 
17:08:14.108824 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cltfq\" (UniqueName: \"kubernetes.io/projected/a2b6a98e-4345-443a-b896-a4b73cda3c34-kube-api-access-cltfq\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.112086 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.112118 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b6a98e-4345-443a-b896-a4b73cda3c34-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.112796 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.113348 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.161343 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.195644 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data" (OuterVolumeSpecName: "config-data") pod "ccfb9f12-84fa-412b-900d-d254cf4303dc" (UID: "ccfb9f12-84fa-412b-900d-d254cf4303dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.215235 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.215534 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.215547 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccfb9f12-84fa-412b-900d-d254cf4303dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.579966 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"] Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.611723 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.635584 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.654911 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.656529 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccfb9f12-84fa-412b-900d-d254cf4303dc","Type":"ContainerDied","Data":"7e7d582451f80630905ee4eb7b78481cfd5c70301b9ea94fb997c9e4789d55b1"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.656579 4792 scope.go:117] "RemoveContainer" containerID="6722e62c7746ba2911bc2c90f0a321f49a4ec2e65b514f6314334aea642eaee0" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.697432 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.708891 4792 generic.go:334] "Generic (PLEG): container finished" podID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerID="00bf7acbdf98d22fa5bdbb646f387a0b0040a58f9197e80e83d17e17987bcb99" exitCode=0 Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.709315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerDied","Data":"00bf7acbdf98d22fa5bdbb646f387a0b0040a58f9197e80e83d17e17987bcb99"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.718343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" event={"ID":"db1742b5-7b52-49d1-8dba-f9c27446efb2","Type":"ContainerStarted","Data":"51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.718848 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" event={"ID":"db1742b5-7b52-49d1-8dba-f9c27446efb2","Type":"ContainerStarted","Data":"829d0127d6b973c48c31a7734e39f5699788c037dde59ac5b8c0b672d4f03f2e"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.719275 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734602 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734629 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkrlw\" (UniqueName: \"kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734887 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.734938 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle\") pod \"69a561ed-717c-43e0-82b3-42bb63bb68b5\" (UID: \"69a561ed-717c-43e0-82b3-42bb63bb68b5\") " Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.742871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" event={"ID":"1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e","Type":"ContainerDied","Data":"fa9ca56d792e41d9110f0a9788e626781b4d18ac151658eb81fdef6c18939eec"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.742911 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa9ca56d792e41d9110f0a9788e626781b4d18ac151658eb81fdef6c18939eec" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.743025 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-82d7-account-create-update-mzkgh" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.778794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs" (OuterVolumeSpecName: "logs") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.794915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw" (OuterVolumeSpecName: "kube-api-access-wkrlw") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "kube-api-access-wkrlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.781396 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-819d-account-create-update-nw88m" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.781273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-819d-account-create-update-nw88m" event={"ID":"d4019769-bbd1-4dea-b732-315d331cb7c7","Type":"ContainerDied","Data":"43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.822412 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43282b1c3834823e8caa22bbec23a070e1f87551b0ffc2a50688ea54a463d515" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.822470 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5k7gr" event={"ID":"a2b6a98e-4345-443a-b896-a4b73cda3c34","Type":"ContainerDied","Data":"ca2f87b4ab7bb203cad42e0259353f57bd27ff756f33bd5a7ea6c35bbe040c92"} Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.822501 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2f87b4ab7bb203cad42e0259353f57bd27ff756f33bd5a7ea6c35bbe040c92" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.813185 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-5k7gr" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.828979 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" podStartSLOduration=3.828959382 podStartE2EDuration="3.828959382s" podCreationTimestamp="2026-03-19 17:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:14.778474666 +0000 UTC m=+1657.924532206" watchObservedRunningTime="2026-03-19 17:08:14.828959382 +0000 UTC m=+1657.975016922" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.834097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts" (OuterVolumeSpecName: "scripts") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.858350 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkrlw\" (UniqueName: \"kubernetes.io/projected/69a561ed-717c-43e0-82b3-42bb63bb68b5-kube-api-access-wkrlw\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.858384 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.858396 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a561ed-717c-43e0-82b3-42bb63bb68b5-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.905734 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.933375 4792 scope.go:117] "RemoveContainer" containerID="32b40ff99bd7a1480b078105d8eb835b39aaed9b71326dda966ec4499aa3e2ca" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.960562 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:14 crc kubenswrapper[4792]: I0319 17:08:14.984420 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.001432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data" (OuterVolumeSpecName: "config-data") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.008933 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.024162 4792 scope.go:117] "RemoveContainer" containerID="3362c0a60992eb4278ec30c79cadcf17969283e0a2509c8936579a727b041d25" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.024302 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052135 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052748 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="proxy-httpd" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052772 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="proxy-httpd" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052799 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-api" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052807 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-api" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052825 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b6a98e-4345-443a-b896-a4b73cda3c34" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052850 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b6a98e-4345-443a-b896-a4b73cda3c34" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052866 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 
17:08:15.052873 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052884 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052892 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052912 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-central-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052921 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-central-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052936 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57162945-9d65-4f62-b049-d8e61a06c508" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052944 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="57162945-9d65-4f62-b049-d8e61a06c508" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.052970 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-notification-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.052980 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-notification-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.053023 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" 
containerName="placement-log" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053032 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-log" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.053048 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f7cad5-612c-4946-8596-c7e5837465a1" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053057 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f7cad5-612c-4946-8596-c7e5837465a1" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.053073 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4019769-bbd1-4dea-b732-315d331cb7c7" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053081 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4019769-bbd1-4dea-b732-315d331cb7c7" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: E0319 17:08:15.053101 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="sg-core" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053108 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="sg-core" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053341 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4019769-bbd1-4dea-b732-315d331cb7c7" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053351 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053361 4792 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="57162945-9d65-4f62-b049-d8e61a06c508" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053373 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-log" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053384 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="sg-core" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053394 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-notification-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053406 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="ceilometer-central-agent" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053413 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" containerName="proxy-httpd" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053422 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053434 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b6a98e-4345-443a-b896-a4b73cda3c34" containerName="mariadb-database-create" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053448 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f7cad5-612c-4946-8596-c7e5837465a1" containerName="mariadb-account-create-update" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.053458 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" containerName="placement-api" Mar 19 17:08:15 crc 
kubenswrapper[4792]: I0319 17:08:15.055714 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.061449 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.061670 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.068719 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.068758 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.068989 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "69a561ed-717c-43e0-82b3-42bb63bb68b5" (UID: "69a561ed-717c-43e0-82b3-42bb63bb68b5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.073880 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.111962 4792 scope.go:117] "RemoveContainer" containerID="bdbf99c4bb4f94f84d18f638be74736db4b076104dab51e068f7301fa70656aa" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.139925 4792 scope.go:117] "RemoveContainer" containerID="00bf7acbdf98d22fa5bdbb646f387a0b0040a58f9197e80e83d17e17987bcb99" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.172980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsvfj\" (UniqueName: \"kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.174342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.174496 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.174519 4792 scope.go:117] "RemoveContainer" containerID="f16e9c510929366875024e6d0538c492fd1f793c410c58461e058867de95a88b" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.174794 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.174914 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.175045 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.175085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.175266 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a561ed-717c-43e0-82b3-42bb63bb68b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277510 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc 
kubenswrapper[4792]: I0319 17:08:15.277593 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277690 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277748 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsvfj\" (UniqueName: \"kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.277871 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.278543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.278580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.282966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.283574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.285625 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.285795 4792 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.302507 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsvfj\" (UniqueName: \"kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj\") pod \"ceilometer-0\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.410468 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.679018 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="9c0efe25-7ec1-4e80-80c8-812972764179" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.219:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.756024 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfb9f12-84fa-412b-900d-d254cf4303dc" path="/var/lib/kubelet/pods/ccfb9f12-84fa-412b-900d-d254cf4303dc/volumes" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.830729 4792 generic.go:334] "Generic (PLEG): container finished" podID="29b28abf-0997-4ee6-a514-eb15f9955657" containerID="227613be2a88466e43d65a676e693897919aa93283aace6724d4afa3f32b16f7" exitCode=0 Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.830795 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" event={"ID":"29b28abf-0997-4ee6-a514-eb15f9955657","Type":"ContainerDied","Data":"227613be2a88466e43d65a676e693897919aa93283aace6724d4afa3f32b16f7"} Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 
17:08:15.830822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" event={"ID":"29b28abf-0997-4ee6-a514-eb15f9955657","Type":"ContainerStarted","Data":"b7cccfad2b7b8ba585c9070d306b7b19d4aa8eade543d885da91d17e1a24c029"} Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.834119 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c679c588-pcfbf" event={"ID":"69a561ed-717c-43e0-82b3-42bb63bb68b5","Type":"ContainerDied","Data":"1ab6386cd60a635190fec871aa66b6a9aee0bfa3b8d822d3813634c72d48ca38"} Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.834208 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c679c588-pcfbf" Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.840689 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" event={"ID":"0b17f3ca-da31-48bc-b5cf-e41676d6960a","Type":"ContainerStarted","Data":"4702d201c34adde61d12e6864cab8079b2de968725808e71f183c363e0545ef7"} Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.845141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f7fc4678c-j72pt" event={"ID":"cd0a7861-6627-4968-9221-f62a57b41288","Type":"ContainerStarted","Data":"4087f65a6c879347bc3d12678a74578b6aa25fb9fca38bd9ef299fc387652d8f"} Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.904043 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.927468 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7c679c588-pcfbf"] Mar 19 17:08:15 crc kubenswrapper[4792]: I0319 17:08:15.973562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.435500 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cinder-api-0" Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.871779 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerStarted","Data":"f4e465e174c1fd45de2797153fd7eaa646a8e4dcfa0606891a44f49eacd7ab5e"} Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.874201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" event={"ID":"29b28abf-0997-4ee6-a514-eb15f9955657","Type":"ContainerStarted","Data":"2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc"} Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.875990 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.881770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerStarted","Data":"61659cac33f04ce90c2f57c93cc4d636b1cb113b94cda883dc8f8c03db706fe1"} Mar 19 17:08:16 crc kubenswrapper[4792]: I0319 17:08:16.914312 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" podStartSLOduration=5.914287939 podStartE2EDuration="5.914287939s" podCreationTimestamp="2026-03-19 17:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:16.903206174 +0000 UTC m=+1660.049263714" watchObservedRunningTime="2026-03-19 17:08:16.914287939 +0000 UTC m=+1660.060345489" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.799542 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a561ed-717c-43e0-82b3-42bb63bb68b5" path="/var/lib/kubelet/pods/69a561ed-717c-43e0-82b3-42bb63bb68b5/volumes" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 
17:08:17.804494 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.805992 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.842258 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.842410 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84cdd6c86c-5thrd" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.857223 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.867444 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mk2\" (UniqueName: \"kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.867520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.867676 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: 
\"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.867728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.885482 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.887227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.897897 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.899568 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.917415 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.927745 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971536 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8b6\" (UniqueName: \"kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971613 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 
17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971665 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mk2\" (UniqueName: \"kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971806 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " 
pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.971887 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.984135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.984309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmc7x\" (UniqueName: \"kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.995901 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mk2\" (UniqueName: \"kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:17 crc kubenswrapper[4792]: I0319 17:08:17.996345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " 
pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:17.999988 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.001896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom\") pod \"heat-engine-7bcd68ccb9-rjwmx\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087010 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmc7x\" (UniqueName: \"kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8b6\" (UniqueName: \"kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 
17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087650 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.087857 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.088216 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.093032 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.105448 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.107256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.107693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.109172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.110300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.114031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.125589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmc7x\" (UniqueName: \"kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x\") pod \"heat-api-b557f87cf-nrhxz\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.129617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8b6\" (UniqueName: \"kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6\") pod \"heat-cfnapi-897fbdd64-wsxgg\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.131977 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.267119 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.320463 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v9mgf"] Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.321989 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.325302 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.325816 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.334337 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8x4rw" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.345898 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v9mgf"] Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.396763 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.410608 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.410691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.410778 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxc6l\" (UniqueName: 
\"kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.410824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.516900 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.517024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.517073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.517155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rxc6l\" (UniqueName: \"kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.524592 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.525806 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.533687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.540851 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxc6l\" (UniqueName: \"kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l\") pod \"nova-cell0-conductor-db-sync-v9mgf\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:18 crc kubenswrapper[4792]: I0319 17:08:18.654515 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:08:19 crc kubenswrapper[4792]: I0319 17:08:19.490498 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:08:19 crc kubenswrapper[4792]: W0319 17:08:19.545789 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9318ba4f_8979_46fa_8cb4_e1c12ee94e35.slice/crio-bcb0d54937dfa5dce244670e9fba79a5684432838ffbd2f879950b32903afcf5 WatchSource:0}: Error finding container bcb0d54937dfa5dce244670e9fba79a5684432838ffbd2f879950b32903afcf5: Status 404 returned error can't find the container with id bcb0d54937dfa5dce244670e9fba79a5684432838ffbd2f879950b32903afcf5 Mar 19 17:08:19 crc kubenswrapper[4792]: I0319 17:08:19.764703 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:19 crc kubenswrapper[4792]: W0319 17:08:19.950987 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5027af97_8929_4efd_b9e0_47736ca10da2.slice/crio-570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450 WatchSource:0}: Error finding container 570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450: Status 404 returned error can't find the container with id 570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450 Mar 19 17:08:19 crc kubenswrapper[4792]: I0319 17:08:19.955909 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v9mgf"] Mar 19 17:08:19 crc kubenswrapper[4792]: I0319 17:08:19.970357 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.035857 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f7fc4678c-j72pt" 
event={"ID":"cd0a7861-6627-4968-9221-f62a57b41288","Type":"ContainerStarted","Data":"3462db22423d4b68746c6401aefd2df44288731ee176dc8e912ef1cae880219e"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.037468 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.040432 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" event={"ID":"5027af97-8929-4efd-b9e0-47736ca10da2","Type":"ContainerStarted","Data":"570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.043132 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b557f87cf-nrhxz" event={"ID":"a71d0910-ac10-4dc4-9e8b-6726c03c9211","Type":"ContainerStarted","Data":"d75383e0af55a1428b1332dcc0934619f34bb0672f791e4107c64f38a624ec25"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.047941 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" event={"ID":"67aa9bdc-577d-4f0b-9900-9c91da75278a","Type":"ContainerStarted","Data":"d6dec32c1f72c5e69f1155e3391b2103d02dd476bc9c195d7a4ebff149182b5a"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.057269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" event={"ID":"9318ba4f-8979-46fa-8cb4-e1c12ee94e35","Type":"ContainerStarted","Data":"97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.057541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" event={"ID":"9318ba4f-8979-46fa-8cb4-e1c12ee94e35","Type":"ContainerStarted","Data":"bcb0d54937dfa5dce244670e9fba79a5684432838ffbd2f879950b32903afcf5"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.057674 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.068099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" event={"ID":"0b17f3ca-da31-48bc-b5cf-e41676d6960a","Type":"ContainerStarted","Data":"1353a1ab8b75a43b116996146b0c635f16a4948b5897bfa25376355993fa4bd6"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.069233 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.069804 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5f7fc4678c-j72pt" podStartSLOduration=4.886909356 podStartE2EDuration="9.069778031s" podCreationTimestamp="2026-03-19 17:08:11 +0000 UTC" firstStartedPulling="2026-03-19 17:08:14.652960482 +0000 UTC m=+1657.799018022" lastFinishedPulling="2026-03-19 17:08:18.835829157 +0000 UTC m=+1661.981886697" observedRunningTime="2026-03-19 17:08:20.069566135 +0000 UTC m=+1663.215623685" watchObservedRunningTime="2026-03-19 17:08:20.069778031 +0000 UTC m=+1663.215835571" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.081145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerStarted","Data":"00d05aa3b8c197aa882ec643ef4af0eac5d6f5bbf05e8371f7aa851091e9f3a8"} Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.093090 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" podStartSLOduration=3.093068971 podStartE2EDuration="3.093068971s" podCreationTimestamp="2026-03-19 17:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:20.09013109 +0000 UTC m=+1663.236188640" 
watchObservedRunningTime="2026-03-19 17:08:20.093068971 +0000 UTC m=+1663.239126531" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.137877 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" podStartSLOduration=4.871374668 podStartE2EDuration="9.13785254s" podCreationTimestamp="2026-03-19 17:08:11 +0000 UTC" firstStartedPulling="2026-03-19 17:08:14.640463138 +0000 UTC m=+1657.786520678" lastFinishedPulling="2026-03-19 17:08:18.90694101 +0000 UTC m=+1662.052998550" observedRunningTime="2026-03-19 17:08:20.105315957 +0000 UTC m=+1663.251373517" watchObservedRunningTime="2026-03-19 17:08:20.13785254 +0000 UTC m=+1663.283910090" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.536235 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-678f6bc965-29ckw" Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.688689 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.689126 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-658464b84d-mwf85" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-api" containerID="cri-o://9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6" gracePeriod=30 Mar 19 17:08:20 crc kubenswrapper[4792]: I0319 17:08:20.689622 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-658464b84d-mwf85" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-httpd" containerID="cri-o://fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851" gracePeriod=30 Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.136099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b557f87cf-nrhxz" 
event={"ID":"a71d0910-ac10-4dc4-9e8b-6726c03c9211","Type":"ContainerStarted","Data":"c54e495c06676adc31bb4e0bd8965208e628bec8a878fa20aaf2df507a6f1b62"} Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.137000 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.139825 4792 generic.go:334] "Generic (PLEG): container finished" podID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerID="6e07faff2878e39c61eecc9886bfc48c9e98b678fd6e1f010bc47eb00701d62f" exitCode=1 Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.140111 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" event={"ID":"67aa9bdc-577d-4f0b-9900-9c91da75278a","Type":"ContainerDied","Data":"6e07faff2878e39c61eecc9886bfc48c9e98b678fd6e1f010bc47eb00701d62f"} Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.140389 4792 scope.go:117] "RemoveContainer" containerID="6e07faff2878e39c61eecc9886bfc48c9e98b678fd6e1f010bc47eb00701d62f" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.148012 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerStarted","Data":"c81c68f24b41aeb2aa27803e933e534588b06e1e5cb0aa7bfc40c4ce4ddd9bf6"} Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.155423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerStarted","Data":"9dba31a3d180846869e38480f3a2226948f195cf9625f8cea42e60fc9c6518be"} Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.157688 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b557f87cf-nrhxz" podStartSLOduration=4.157672246 podStartE2EDuration="4.157672246s" podCreationTimestamp="2026-03-19 17:08:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:21.155751183 +0000 UTC m=+1664.301808743" watchObservedRunningTime="2026-03-19 17:08:21.157672246 +0000 UTC m=+1664.303729786" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.183035 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerID="fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851" exitCode=0 Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.183503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerDied","Data":"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851"} Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.212120 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.458134264 podStartE2EDuration="26.21209858s" podCreationTimestamp="2026-03-19 17:07:55 +0000 UTC" firstStartedPulling="2026-03-19 17:07:57.144119596 +0000 UTC m=+1640.290177136" lastFinishedPulling="2026-03-19 17:08:15.898083922 +0000 UTC m=+1659.044141452" observedRunningTime="2026-03-19 17:08:21.206829465 +0000 UTC m=+1664.352887005" watchObservedRunningTime="2026-03-19 17:08:21.21209858 +0000 UTC m=+1664.358156120" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.499135 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.500335 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.215:8080/\": dial tcp 10.217.0.215:8080: connect: connection refused" Mar 19 17:08:21 crc 
kubenswrapper[4792]: I0319 17:08:21.744246 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:08:21 crc kubenswrapper[4792]: E0319 17:08:21.744509 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.841139 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.958751 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:08:21 crc kubenswrapper[4792]: I0319 17:08:21.959444 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="dnsmasq-dns" containerID="cri-o://0dd6ea90c2d8cf814a163b0dfc0c67e0114e15ee2b1e225aa0f1b156e57993a4" gracePeriod=10 Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.205131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" event={"ID":"67aa9bdc-577d-4f0b-9900-9c91da75278a","Type":"ContainerStarted","Data":"11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3"} Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.205493 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.226074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerStarted","Data":"079127b573bb771ca7ec002f865b64697e9c3f4c18d484333b33e3b5fc9ea6fd"} Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.229771 4792 generic.go:334] "Generic (PLEG): container finished" podID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerID="c54e495c06676adc31bb4e0bd8965208e628bec8a878fa20aaf2df507a6f1b62" exitCode=1 Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.230191 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b557f87cf-nrhxz" event={"ID":"a71d0910-ac10-4dc4-9e8b-6726c03c9211","Type":"ContainerDied","Data":"c54e495c06676adc31bb4e0bd8965208e628bec8a878fa20aaf2df507a6f1b62"} Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.231079 4792 scope.go:117] "RemoveContainer" containerID="c54e495c06676adc31bb4e0bd8965208e628bec8a878fa20aaf2df507a6f1b62" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.241606 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" podStartSLOduration=5.241585372 podStartE2EDuration="5.241585372s" podCreationTimestamp="2026-03-19 17:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:22.230165057 +0000 UTC m=+1665.376222597" watchObservedRunningTime="2026-03-19 17:08:22.241585372 +0000 UTC m=+1665.387642902" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.666985 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.703467 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.705118 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.714356 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.714630 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.719096 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.754944 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.819307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.819683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.820243 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 
17:08:22.820287 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6fg\" (UniqueName: \"kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.820325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.820462 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.838253 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.849089 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.852082 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.852135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.856521 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.923786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.923873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.923915 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74l96\" (UniqueName: \"kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz6fg\" (UniqueName: \"kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924165 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.924545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.933630 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.937206 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.937863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.948521 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz6fg\" (UniqueName: \"kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.958538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:22 crc kubenswrapper[4792]: I0319 17:08:22.975706 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom\") pod \"heat-api-5f6467b4f6-xl4lw\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle\") pod 
\"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028684 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74l96\" (UniqueName: \"kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.028786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: 
\"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.040225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.043488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.044505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.045099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.045403 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " 
pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.063614 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74l96\" (UniqueName: \"kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96\") pod \"heat-cfnapi-6c6dcb76d4-jdvrw\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.070447 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.188512 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.283597 4792 generic.go:334] "Generic (PLEG): container finished" podID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerID="4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57" exitCode=1 Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.283699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b557f87cf-nrhxz" event={"ID":"a71d0910-ac10-4dc4-9e8b-6726c03c9211","Type":"ContainerDied","Data":"4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57"} Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.283736 4792 scope.go:117] "RemoveContainer" containerID="c54e495c06676adc31bb4e0bd8965208e628bec8a878fa20aaf2df507a6f1b62" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.284554 4792 scope.go:117] "RemoveContainer" containerID="4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57" Mar 19 17:08:23 crc kubenswrapper[4792]: E0319 17:08:23.284797 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-b557f87cf-nrhxz_openstack(a71d0910-ac10-4dc4-9e8b-6726c03c9211)\"" pod="openstack/heat-api-b557f87cf-nrhxz" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.317522 4792 generic.go:334] "Generic (PLEG): container finished" podID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerID="11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3" exitCode=1 Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.317641 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" event={"ID":"67aa9bdc-577d-4f0b-9900-9c91da75278a","Type":"ContainerDied","Data":"11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3"} Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.333962 4792 scope.go:117] "RemoveContainer" containerID="11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3" Mar 19 17:08:23 crc kubenswrapper[4792]: E0319 17:08:23.334487 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-897fbdd64-wsxgg_openstack(67aa9bdc-577d-4f0b-9900-9c91da75278a)\"" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.343279 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerID="0dd6ea90c2d8cf814a163b0dfc0c67e0114e15ee2b1e225aa0f1b156e57993a4" exitCode=0 Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.344489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" event={"ID":"6ac9ea57-0d86-4c21-9c31-ee9487da0942","Type":"ContainerDied","Data":"0dd6ea90c2d8cf814a163b0dfc0c67e0114e15ee2b1e225aa0f1b156e57993a4"} Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.344720 4792 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5f7fc4678c-j72pt" podUID="cd0a7861-6627-4968-9221-f62a57b41288" containerName="heat-api" containerID="cri-o://3462db22423d4b68746c6401aefd2df44288731ee176dc8e912ef1cae880219e" gracePeriod=60 Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.345110 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" podUID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" containerName="heat-cfnapi" containerID="cri-o://1353a1ab8b75a43b116996146b0c635f16a4948b5897bfa25376355993fa4bd6" gracePeriod=60 Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.397893 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.398220 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.703403 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.713123 4792 scope.go:117] "RemoveContainer" containerID="6e07faff2878e39c61eecc9886bfc48c9e98b678fd6e1f010bc47eb00701d62f" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.758591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.758660 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.758713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.759081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdphd\" (UniqueName: \"kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.759323 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: 
\"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.759356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc\") pod \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\" (UID: \"6ac9ea57-0d86-4c21-9c31-ee9487da0942\") " Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.852375 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd" (OuterVolumeSpecName: "kube-api-access-xdphd") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). InnerVolumeSpecName "kube-api-access-xdphd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.868670 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdphd\" (UniqueName: \"kubernetes.io/projected/6ac9ea57-0d86-4c21-9c31-ee9487da0942-kube-api-access-xdphd\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.899098 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.925682 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config" (OuterVolumeSpecName: "config") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.936776 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.971036 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:23 crc kubenswrapper[4792]: I0319 17:08:23.971066 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.011590 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.030194 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.058035 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ac9ea57-0d86-4c21-9c31-ee9487da0942" (UID: "6ac9ea57-0d86-4c21-9c31-ee9487da0942"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.073421 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.073451 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.073462 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac9ea57-0d86-4c21-9c31-ee9487da0942-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.314149 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:08:24 crc kubenswrapper[4792]: W0319 17:08:24.316519 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a16c447_44d2_4bba_ad99_aa5893891486.slice/crio-f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1 WatchSource:0}: Error finding container f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1: Status 404 returned error can't find the container with id f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1 Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.396345 4792 generic.go:334] "Generic (PLEG): container finished" podID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" containerID="1353a1ab8b75a43b116996146b0c635f16a4948b5897bfa25376355993fa4bd6" exitCode=0 Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.396718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" event={"ID":"0b17f3ca-da31-48bc-b5cf-e41676d6960a","Type":"ContainerDied","Data":"1353a1ab8b75a43b116996146b0c635f16a4948b5897bfa25376355993fa4bd6"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.425012 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerStarted","Data":"32b20247b645b394650638e5d327a94960c29ff55e242205af334beefa4422c0"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.425080 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.428834 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd0a7861-6627-4968-9221-f62a57b41288" containerID="3462db22423d4b68746c6401aefd2df44288731ee176dc8e912ef1cae880219e" exitCode=0 Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.428919 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f7fc4678c-j72pt" event={"ID":"cd0a7861-6627-4968-9221-f62a57b41288","Type":"ContainerDied","Data":"3462db22423d4b68746c6401aefd2df44288731ee176dc8e912ef1cae880219e"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.443937 4792 scope.go:117] "RemoveContainer" containerID="4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57" Mar 19 17:08:24 crc kubenswrapper[4792]: E0319 17:08:24.444264 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-b557f87cf-nrhxz_openstack(a71d0910-ac10-4dc4-9e8b-6726c03c9211)\"" pod="openstack/heat-api-b557f87cf-nrhxz" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.460053 4792 scope.go:117] "RemoveContainer" containerID="11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3" Mar 19 
17:08:24 crc kubenswrapper[4792]: E0319 17:08:24.460314 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-897fbdd64-wsxgg_openstack(67aa9bdc-577d-4f0b-9900-9c91da75278a)\"" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.467379 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.957139444 podStartE2EDuration="10.467362782s" podCreationTimestamp="2026-03-19 17:08:14 +0000 UTC" firstStartedPulling="2026-03-19 17:08:16.010102857 +0000 UTC m=+1659.156160397" lastFinishedPulling="2026-03-19 17:08:23.520326195 +0000 UTC m=+1666.666383735" observedRunningTime="2026-03-19 17:08:24.445314196 +0000 UTC m=+1667.591371736" watchObservedRunningTime="2026-03-19 17:08:24.467362782 +0000 UTC m=+1667.613420322" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.484772 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" event={"ID":"6ac9ea57-0d86-4c21-9c31-ee9487da0942","Type":"ContainerDied","Data":"1e2d82ba96fa33ad81ece60bb2976ff5da025c7b0b6b3152b9b0d851bb230bf6"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.484830 4792 scope.go:117] "RemoveContainer" containerID="0dd6ea90c2d8cf814a163b0dfc0c67e0114e15ee2b1e225aa0f1b156e57993a4" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.484985 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vd5bm" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.499056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" event={"ID":"7a16c447-44d2-4bba-ad99-aa5893891486","Type":"ContainerStarted","Data":"f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.507607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6467b4f6-xl4lw" event={"ID":"a287def6-0542-42d7-bf64-dca21b2bd57b","Type":"ContainerStarted","Data":"4487a7f3ecc86a47b3f11ac9a2d44ddc03f3fcf77efa79023f10f34aa21c6ec8"} Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.507762 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.539826 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5f6467b4f6-xl4lw" podStartSLOduration=2.539807021 podStartE2EDuration="2.539807021s" podCreationTimestamp="2026-03-19 17:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:24.522177727 +0000 UTC m=+1667.668235267" watchObservedRunningTime="2026-03-19 17:08:24.539807021 +0000 UTC m=+1667.685864561" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.553559 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.566027 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.571453 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.587791 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vd5bm"] Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.590427 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle\") pod \"cd0a7861-6627-4968-9221-f62a57b41288\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.590476 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom\") pod \"cd0a7861-6627-4968-9221-f62a57b41288\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.590568 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data\") pod \"cd0a7861-6627-4968-9221-f62a57b41288\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.590586 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9cw2\" (UniqueName: \"kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2\") pod \"cd0a7861-6627-4968-9221-f62a57b41288\" (UID: \"cd0a7861-6627-4968-9221-f62a57b41288\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.591030 4792 scope.go:117] "RemoveContainer" containerID="b9dcfb4c7a807e84d74e35aa0197aa75de1ab3e5548b49187da33dca2160c272" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.598007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd0a7861-6627-4968-9221-f62a57b41288" (UID: "cd0a7861-6627-4968-9221-f62a57b41288"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.613550 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2" (OuterVolumeSpecName: "kube-api-access-t9cw2") pod "cd0a7861-6627-4968-9221-f62a57b41288" (UID: "cd0a7861-6627-4968-9221-f62a57b41288"). InnerVolumeSpecName "kube-api-access-t9cw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.651204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0a7861-6627-4968-9221-f62a57b41288" (UID: "cd0a7861-6627-4968-9221-f62a57b41288"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.702166 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzq2k\" (UniqueName: \"kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k\") pod \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.702251 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle\") pod \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.702269 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data\") pod \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.702331 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom\") pod \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\" (UID: \"0b17f3ca-da31-48bc-b5cf-e41676d6960a\") " Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.706634 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.706666 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data-custom\") on 
node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.706679 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9cw2\" (UniqueName: \"kubernetes.io/projected/cd0a7861-6627-4968-9221-f62a57b41288-kube-api-access-t9cw2\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.711103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0b17f3ca-da31-48bc-b5cf-e41676d6960a" (UID: "0b17f3ca-da31-48bc-b5cf-e41676d6960a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.719537 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k" (OuterVolumeSpecName: "kube-api-access-vzq2k") pod "0b17f3ca-da31-48bc-b5cf-e41676d6960a" (UID: "0b17f3ca-da31-48bc-b5cf-e41676d6960a"). InnerVolumeSpecName "kube-api-access-vzq2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.753999 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b17f3ca-da31-48bc-b5cf-e41676d6960a" (UID: "0b17f3ca-da31-48bc-b5cf-e41676d6960a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.758529 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data" (OuterVolumeSpecName: "config-data") pod "cd0a7861-6627-4968-9221-f62a57b41288" (UID: "cd0a7861-6627-4968-9221-f62a57b41288"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.809966 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzq2k\" (UniqueName: \"kubernetes.io/projected/0b17f3ca-da31-48bc-b5cf-e41676d6960a-kube-api-access-vzq2k\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.810023 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.810047 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.810058 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0a7861-6627-4968-9221-f62a57b41288-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.816024 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data" (OuterVolumeSpecName: "config-data") pod "0b17f3ca-da31-48bc-b5cf-e41676d6960a" (UID: "0b17f3ca-da31-48bc-b5cf-e41676d6960a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:24 crc kubenswrapper[4792]: I0319 17:08:24.911780 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b17f3ca-da31-48bc-b5cf-e41676d6960a-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.532309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" event={"ID":"7a16c447-44d2-4bba-ad99-aa5893891486","Type":"ContainerStarted","Data":"42f1fd9797cd24239c8bab1e7501926cf12f9cce920f9d94c0561ac1f7227878"} Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.532959 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.537262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6467b4f6-xl4lw" event={"ID":"a287def6-0542-42d7-bf64-dca21b2bd57b","Type":"ContainerStarted","Data":"0f5a4c19f982a0989895ea3de8070e105bb11f88f371c336cdcaf224a07e8247"} Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.543184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" event={"ID":"0b17f3ca-da31-48bc-b5cf-e41676d6960a","Type":"ContainerDied","Data":"4702d201c34adde61d12e6864cab8079b2de968725808e71f183c363e0545ef7"} Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.543246 4792 scope.go:117] "RemoveContainer" containerID="1353a1ab8b75a43b116996146b0c635f16a4948b5897bfa25376355993fa4bd6" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.543375 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fd575b5d8-m4xkm" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.560298 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" podStartSLOduration=3.560274825 podStartE2EDuration="3.560274825s" podCreationTimestamp="2026-03-19 17:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:25.54992693 +0000 UTC m=+1668.695984470" watchObservedRunningTime="2026-03-19 17:08:25.560274825 +0000 UTC m=+1668.706332365" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.569859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f7fc4678c-j72pt" event={"ID":"cd0a7861-6627-4968-9221-f62a57b41288","Type":"ContainerDied","Data":"4087f65a6c879347bc3d12678a74578b6aa25fb9fca38bd9ef299fc387652d8f"} Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.570003 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f7fc4678c-j72pt" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.571018 4792 scope.go:117] "RemoveContainer" containerID="4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57" Mar 19 17:08:25 crc kubenswrapper[4792]: E0319 17:08:25.571670 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-b557f87cf-nrhxz_openstack(a71d0910-ac10-4dc4-9e8b-6726c03c9211)\"" pod="openstack/heat-api-b557f87cf-nrhxz" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.595260 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:08:25 crc kubenswrapper[4792]: E0319 17:08:25.596345 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="init" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.596367 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="init" Mar 19 17:08:25 crc kubenswrapper[4792]: E0319 17:08:25.596393 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="dnsmasq-dns" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.596401 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="dnsmasq-dns" Mar 19 17:08:25 crc kubenswrapper[4792]: E0319 17:08:25.596428 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" containerName="heat-cfnapi" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.596435 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" containerName="heat-cfnapi" Mar 19 17:08:25 
crc kubenswrapper[4792]: E0319 17:08:25.596458 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0a7861-6627-4968-9221-f62a57b41288" containerName="heat-api" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.596465 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0a7861-6627-4968-9221-f62a57b41288" containerName="heat-api" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.596955 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" containerName="heat-cfnapi" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.597014 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0a7861-6627-4968-9221-f62a57b41288" containerName="heat-api" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.597039 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" containerName="dnsmasq-dns" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.630053 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.638404 4792 scope.go:117] "RemoveContainer" containerID="3462db22423d4b68746c6401aefd2df44288731ee176dc8e912ef1cae880219e" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.696687 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.727728 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.732537 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9z9j\" (UniqueName: \"kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.732634 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.732919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.738974 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-cfnapi-fd575b5d8-m4xkm"] Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.764248 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b17f3ca-da31-48bc-b5cf-e41676d6960a" path="/var/lib/kubelet/pods/0b17f3ca-da31-48bc-b5cf-e41676d6960a/volumes" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.767447 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac9ea57-0d86-4c21-9c31-ee9487da0942" path="/var/lib/kubelet/pods/6ac9ea57-0d86-4c21-9c31-ee9487da0942/volumes" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.782389 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.796680 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5f7fc4678c-j72pt"] Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.836531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.836674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9z9j\" (UniqueName: \"kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.836798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content\") pod \"certified-operators-r85rk\" (UID: 
\"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.838700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.839267 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.861621 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9z9j\" (UniqueName: \"kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j\") pod \"certified-operators-r85rk\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:25 crc kubenswrapper[4792]: I0319 17:08:25.996545 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:26 crc kubenswrapper[4792]: I0319 17:08:26.670466 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.118925 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.208363 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.483124 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.549350 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.549588 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-central-agent" containerID="cri-o://00d05aa3b8c197aa882ec643ef4af0eac5d6f5bbf05e8371f7aa851091e9f3a8" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.550096 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="proxy-httpd" containerID="cri-o://32b20247b645b394650638e5d327a94960c29ff55e242205af334beefa4422c0" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.550144 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="sg-core" containerID="cri-o://079127b573bb771ca7ec002f865b64697e9c3f4c18d484333b33e3b5fc9ea6fd" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 
17:08:27.550195 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-notification-agent" containerID="cri-o://9dba31a3d180846869e38480f3a2226948f195cf9625f8cea42e60fc9c6518be" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.599271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config\") pod \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.599919 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqgh2\" (UniqueName: \"kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2\") pod \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.600014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs\") pod \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.600087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config\") pod \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.600201 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle\") pod 
\"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\" (UID: \"2e6b95f1-831d-4dd6-b888-ec93ff45f43a\") " Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.608181 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2" (OuterVolumeSpecName: "kube-api-access-fqgh2") pod "2e6b95f1-831d-4dd6-b888-ec93ff45f43a" (UID: "2e6b95f1-831d-4dd6-b888-ec93ff45f43a"). InnerVolumeSpecName "kube-api-access-fqgh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.608414 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2e6b95f1-831d-4dd6-b888-ec93ff45f43a" (UID: "2e6b95f1-831d-4dd6-b888-ec93ff45f43a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.704314 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqgh2\" (UniqueName: \"kubernetes.io/projected/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-kube-api-access-fqgh2\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.704347 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.710196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config" (OuterVolumeSpecName: "config") pod "2e6b95f1-831d-4dd6-b888-ec93ff45f43a" (UID: "2e6b95f1-831d-4dd6-b888-ec93ff45f43a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.720589 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerID="9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6" exitCode=0 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.720673 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerDied","Data":"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6"} Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.720699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658464b84d-mwf85" event={"ID":"2e6b95f1-831d-4dd6-b888-ec93ff45f43a","Type":"ContainerDied","Data":"160d242e941f79761ccbdccca8c891f06fe7f4458805f8dfca8317fed32a4a3f"} Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.720715 4792 scope.go:117] "RemoveContainer" containerID="fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.720853 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-658464b84d-mwf85" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.753977 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2e6b95f1-831d-4dd6-b888-ec93ff45f43a" (UID: "2e6b95f1-831d-4dd6-b888-ec93ff45f43a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.776055 4792 generic.go:334] "Generic (PLEG): container finished" podID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerID="9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041" exitCode=0 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.776495 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="cinder-scheduler" containerID="cri-o://61659cac33f04ce90c2f57c93cc4d636b1cb113b94cda883dc8f8c03db706fe1" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.777407 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="probe" containerID="cri-o://c81c68f24b41aeb2aa27803e933e534588b06e1e5cb0aa7bfc40c4ce4ddd9bf6" gracePeriod=30 Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.787376 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6b95f1-831d-4dd6-b888-ec93ff45f43a" (UID: "2e6b95f1-831d-4dd6-b888-ec93ff45f43a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.787861 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0a7861-6627-4968-9221-f62a57b41288" path="/var/lib/kubelet/pods/cd0a7861-6627-4968-9221-f62a57b41288/volumes" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.789830 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerDied","Data":"9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041"} Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.789911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerStarted","Data":"e04b9d69efaa8a63c184d6b4c27d780e3187f26606e1702154358ca7b22ce932"} Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.808679 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.808719 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.808732 4792 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e6b95f1-831d-4dd6-b888-ec93ff45f43a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.845871 4792 scope.go:117] "RemoveContainer" containerID="9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.873663 4792 scope.go:117] 
"RemoveContainer" containerID="fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851" Mar 19 17:08:27 crc kubenswrapper[4792]: E0319 17:08:27.874089 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851\": container with ID starting with fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851 not found: ID does not exist" containerID="fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.874123 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851"} err="failed to get container status \"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851\": rpc error: code = NotFound desc = could not find container \"fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851\": container with ID starting with fd50545684833623801c94d87634b6212c650152bbcb312fdcf9cbedd53b4851 not found: ID does not exist" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.874142 4792 scope.go:117] "RemoveContainer" containerID="9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6" Mar 19 17:08:27 crc kubenswrapper[4792]: E0319 17:08:27.874332 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6\": container with ID starting with 9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6 not found: ID does not exist" containerID="9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6" Mar 19 17:08:27 crc kubenswrapper[4792]: I0319 17:08:27.874446 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6"} err="failed to get container status \"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6\": rpc error: code = NotFound desc = could not find container \"9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6\": container with ID starting with 9c08e00d6bf2e0da0239d6cffd6022a7335edd7453f1be188936596738d39ef6 not found: ID does not exist" Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.087772 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.156882 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-658464b84d-mwf85"] Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.268303 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.269187 4792 scope.go:117] "RemoveContainer" containerID="11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3" Mar 19 17:08:28 crc kubenswrapper[4792]: E0319 17:08:28.269523 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-897fbdd64-wsxgg_openstack(67aa9bdc-577d-4f0b-9900-9c91da75278a)\"" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.795596 4792 generic.go:334] "Generic (PLEG): container finished" podID="96125084-cfab-452e-9b96-6643e257344c" containerID="32b20247b645b394650638e5d327a94960c29ff55e242205af334beefa4422c0" exitCode=0 Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.795953 4792 generic.go:334] "Generic (PLEG): container finished" podID="96125084-cfab-452e-9b96-6643e257344c" 
containerID="079127b573bb771ca7ec002f865b64697e9c3f4c18d484333b33e3b5fc9ea6fd" exitCode=2 Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.795961 4792 generic.go:334] "Generic (PLEG): container finished" podID="96125084-cfab-452e-9b96-6643e257344c" containerID="9dba31a3d180846869e38480f3a2226948f195cf9625f8cea42e60fc9c6518be" exitCode=0 Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.795998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerDied","Data":"32b20247b645b394650638e5d327a94960c29ff55e242205af334beefa4422c0"} Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.796023 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerDied","Data":"079127b573bb771ca7ec002f865b64697e9c3f4c18d484333b33e3b5fc9ea6fd"} Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.796033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerDied","Data":"9dba31a3d180846869e38480f3a2226948f195cf9625f8cea42e60fc9c6518be"} Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.806374 4792 generic.go:334] "Generic (PLEG): container finished" podID="dd0be369-d704-43ad-851a-c7e24798a150" containerID="c81c68f24b41aeb2aa27803e933e534588b06e1e5cb0aa7bfc40c4ce4ddd9bf6" exitCode=0 Mar 19 17:08:28 crc kubenswrapper[4792]: I0319 17:08:28.806415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerDied","Data":"c81c68f24b41aeb2aa27803e933e534588b06e1e5cb0aa7bfc40c4ce4ddd9bf6"} Mar 19 17:08:29 crc kubenswrapper[4792]: I0319 17:08:29.756433 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" 
path="/var/lib/kubelet/pods/2e6b95f1-831d-4dd6-b888-ec93ff45f43a/volumes" Mar 19 17:08:29 crc kubenswrapper[4792]: I0319 17:08:29.821459 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerStarted","Data":"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2"} Mar 19 17:08:30 crc kubenswrapper[4792]: I0319 17:08:30.857079 4792 generic.go:334] "Generic (PLEG): container finished" podID="96125084-cfab-452e-9b96-6643e257344c" containerID="00d05aa3b8c197aa882ec643ef4af0eac5d6f5bbf05e8371f7aa851091e9f3a8" exitCode=0 Mar 19 17:08:30 crc kubenswrapper[4792]: I0319 17:08:30.857130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerDied","Data":"00d05aa3b8c197aa882ec643ef4af0eac5d6f5bbf05e8371f7aa851091e9f3a8"} Mar 19 17:08:31 crc kubenswrapper[4792]: I0319 17:08:31.599535 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:31 crc kubenswrapper[4792]: I0319 17:08:31.877355 4792 generic.go:334] "Generic (PLEG): container finished" podID="dd0be369-d704-43ad-851a-c7e24798a150" containerID="61659cac33f04ce90c2f57c93cc4d636b1cb113b94cda883dc8f8c03db706fe1" exitCode=0 Mar 19 17:08:31 crc kubenswrapper[4792]: I0319 17:08:31.877537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerDied","Data":"61659cac33f04ce90c2f57c93cc4d636b1cb113b94cda883dc8f8c03db706fe1"} Mar 19 17:08:31 crc kubenswrapper[4792]: I0319 17:08:31.883400 4792 generic.go:334] "Generic (PLEG): container finished" podID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerID="ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2" exitCode=0 Mar 19 17:08:31 crc kubenswrapper[4792]: I0319 
17:08:31.883443 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerDied","Data":"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2"} Mar 19 17:08:35 crc kubenswrapper[4792]: I0319 17:08:35.740310 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:08:35 crc kubenswrapper[4792]: E0319 17:08:35.741056 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:08:36 crc kubenswrapper[4792]: I0319 17:08:36.168421 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:08:36 crc kubenswrapper[4792]: I0319 17:08:36.263077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:36 crc kubenswrapper[4792]: I0319 17:08:36.319240 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:08:36 crc kubenswrapper[4792]: I0319 17:08:36.406247 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.508788 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.632653 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom\") pod \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.632773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmc7x\" (UniqueName: \"kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x\") pod \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.632987 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle\") pod \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.633036 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data\") pod \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\" (UID: \"a71d0910-ac10-4dc4-9e8b-6726c03c9211\") " Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.641980 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x" (OuterVolumeSpecName: "kube-api-access-gmc7x") pod "a71d0910-ac10-4dc4-9e8b-6726c03c9211" (UID: "a71d0910-ac10-4dc4-9e8b-6726c03c9211"). InnerVolumeSpecName "kube-api-access-gmc7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.642186 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a71d0910-ac10-4dc4-9e8b-6726c03c9211" (UID: "a71d0910-ac10-4dc4-9e8b-6726c03c9211"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.735018 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a71d0910-ac10-4dc4-9e8b-6726c03c9211" (UID: "a71d0910-ac10-4dc4-9e8b-6726c03c9211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.736655 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.736686 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmc7x\" (UniqueName: \"kubernetes.io/projected/a71d0910-ac10-4dc4-9e8b-6726c03c9211-kube-api-access-gmc7x\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.736697 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.808983 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data" 
(OuterVolumeSpecName: "config-data") pod "a71d0910-ac10-4dc4-9e8b-6726c03c9211" (UID: "a71d0910-ac10-4dc4-9e8b-6726c03c9211"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.839659 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71d0910-ac10-4dc4-9e8b-6726c03c9211-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.967632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b557f87cf-nrhxz" event={"ID":"a71d0910-ac10-4dc4-9e8b-6726c03c9211","Type":"ContainerDied","Data":"d75383e0af55a1428b1332dcc0934619f34bb0672f791e4107c64f38a624ec25"} Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.967683 4792 scope.go:117] "RemoveContainer" containerID="4f5e2be366bc0cca9e6495b025f3d1b0b021acdd438dfe918bb37869e5ab7e57" Mar 19 17:08:37 crc kubenswrapper[4792]: I0319 17:08:37.967685 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b557f87cf-nrhxz" Mar 19 17:08:38 crc kubenswrapper[4792]: I0319 17:08:38.007669 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:38 crc kubenswrapper[4792]: I0319 17:08:38.018717 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b557f87cf-nrhxz"] Mar 19 17:08:38 crc kubenswrapper[4792]: I0319 17:08:38.174021 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:08:38 crc kubenswrapper[4792]: I0319 17:08:38.248654 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:38 crc kubenswrapper[4792]: I0319 17:08:38.248947 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerName="heat-engine" containerID="cri-o://51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" gracePeriod=60 Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.127874 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:08:39 crc kubenswrapper[4792]: E0319 17:08:39.128405 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128418 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-api" Mar 19 17:08:39 crc kubenswrapper[4792]: E0319 17:08:39.128438 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-httpd" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128447 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-httpd" Mar 19 17:08:39 crc kubenswrapper[4792]: E0319 17:08:39.128478 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128486 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: E0319 17:08:39.128500 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128505 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128720 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128731 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128746 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" containerName="heat-api" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.128757 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6b95f1-831d-4dd6-b888-ec93ff45f43a" containerName="neutron-httpd" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.130606 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.156331 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.276274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.276625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.276779 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgft\" (UniqueName: \"kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.379372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwgft\" (UniqueName: \"kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.379558 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.379605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.380263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.380554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.410199 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwgft\" (UniqueName: \"kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft\") pod \"redhat-marketplace-2bmhr\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.480871 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:39 crc kubenswrapper[4792]: I0319 17:08:39.755501 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71d0910-ac10-4dc4-9e8b-6726c03c9211" path="/var/lib/kubelet/pods/a71d0910-ac10-4dc4-9e8b-6726c03c9211/volumes" Mar 19 17:08:41 crc kubenswrapper[4792]: E0319 17:08:41.568927 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:08:41 crc kubenswrapper[4792]: E0319 17:08:41.573246 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:08:41 crc kubenswrapper[4792]: E0319 17:08:41.574588 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:08:41 crc kubenswrapper[4792]: E0319 17:08:41.574625 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerName="heat-engine" Mar 19 17:08:44 crc kubenswrapper[4792]: I0319 17:08:44.651491 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:44 crc kubenswrapper[4792]: I0319 17:08:44.652252 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-log" containerID="cri-o://0202957be291e371b746b6e8b63164def65e04a55e4792e0048049b0a584e935" gracePeriod=30 Mar 19 17:08:44 crc kubenswrapper[4792]: I0319 17:08:44.652341 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-httpd" containerID="cri-o://d7642887d1fb972ae5659d67a11e7271a301b5230392701b116dad0b2817e27a" gracePeriod=30 Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.098833 4792 generic.go:334] "Generic (PLEG): container finished" podID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerID="0202957be291e371b746b6e8b63164def65e04a55e4792e0048049b0a584e935" exitCode=143 Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.098931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerDied","Data":"0202957be291e371b746b6e8b63164def65e04a55e4792e0048049b0a584e935"} Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.101384 4792 generic.go:334] "Generic (PLEG): container finished" podID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerID="51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" exitCode=0 Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.101421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" event={"ID":"db1742b5-7b52-49d1-8dba-f9c27446efb2","Type":"ContainerDied","Data":"51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0"} Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.836322 4792 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.837050 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-log" containerID="cri-o://482f902e126b3cd1ae01a7f21ff6a7aacd4f8c12e2b1de995d82188875ae87cb" gracePeriod=30 Mar 19 17:08:45 crc kubenswrapper[4792]: I0319 17:08:45.837086 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-httpd" containerID="cri-o://d0c26199bfce4450860cca013e45a244323038f5192444cb572f50e0cde2e9b8" gracePeriod=30 Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.116685 4792 generic.go:334] "Generic (PLEG): container finished" podID="da737c96-aa96-4a26-8fe5-33778519b02d" containerID="482f902e126b3cd1ae01a7f21ff6a7aacd4f8c12e2b1de995d82188875ae87cb" exitCode=143 Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.116745 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerDied","Data":"482f902e126b3cd1ae01a7f21ff6a7aacd4f8c12e2b1de995d82188875ae87cb"} Mar 19 17:08:46 crc kubenswrapper[4792]: E0319 17:08:46.165176 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Mar 19 17:08:46 crc kubenswrapper[4792]: E0319 17:08:46.165322 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfch58bh64bh55h8dh575h5f8h568h689h67chf6h58fh5cfh599h5dbh75h648h666h5dbh67dh7h64bh675h575hb6h699h689h7fhf4hb6h656h55fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(d7885af7-09a3-4ea4-b59f-2de96f42fd0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:08:46 crc kubenswrapper[4792]: E0319 17:08:46.166648 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.495333 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.497630 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.558833 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559246 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data\") pod \"67aa9bdc-577d-4f0b-9900-9c91da75278a\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559404 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559560 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nps8\" (UniqueName: \"kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559592 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle\") pod \"67aa9bdc-577d-4f0b-9900-9c91da75278a\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559672 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559725 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom\") pod \"67aa9bdc-577d-4f0b-9900-9c91da75278a\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559785 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp8b6\" (UniqueName: \"kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6\") pod \"67aa9bdc-577d-4f0b-9900-9c91da75278a\" (UID: \"67aa9bdc-577d-4f0b-9900-9c91da75278a\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.559863 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom\") pod \"dd0be369-d704-43ad-851a-c7e24798a150\" (UID: \"dd0be369-d704-43ad-851a-c7e24798a150\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.561762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.578594 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.585099 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6" (OuterVolumeSpecName: "kube-api-access-xp8b6") pod "67aa9bdc-577d-4f0b-9900-9c91da75278a" (UID: "67aa9bdc-577d-4f0b-9900-9c91da75278a"). InnerVolumeSpecName "kube-api-access-xp8b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.588031 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8" (OuterVolumeSpecName: "kube-api-access-5nps8") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). InnerVolumeSpecName "kube-api-access-5nps8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.612010 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts" (OuterVolumeSpecName: "scripts") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.612159 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.612985 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67aa9bdc-577d-4f0b-9900-9c91da75278a" (UID: "67aa9bdc-577d-4f0b-9900-9c91da75278a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676462 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676547 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsvfj\" (UniqueName: \"kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676596 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676733 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd\") pod 
\"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676862 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676894 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.676966 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml\") pod \"96125084-cfab-452e-9b96-6643e257344c\" (UID: \"96125084-cfab-452e-9b96-6643e257344c\") " Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.677453 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nps8\" (UniqueName: \"kubernetes.io/projected/dd0be369-d704-43ad-851a-c7e24798a150-kube-api-access-5nps8\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.677464 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dd0be369-d704-43ad-851a-c7e24798a150-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.677473 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc 
kubenswrapper[4792]: I0319 17:08:46.677482 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp8b6\" (UniqueName: \"kubernetes.io/projected/67aa9bdc-577d-4f0b-9900-9c91da75278a-kube-api-access-xp8b6\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.677490 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.677499 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.692480 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.692774 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.718077 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts" (OuterVolumeSpecName: "scripts") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.731328 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj" (OuterVolumeSpecName: "kube-api-access-lsvfj") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "kube-api-access-lsvfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.780394 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.780430 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.780439 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96125084-cfab-452e-9b96-6643e257344c-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.780448 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsvfj\" (UniqueName: \"kubernetes.io/projected/96125084-cfab-452e-9b96-6643e257344c-kube-api-access-lsvfj\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.917051 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:46 crc kubenswrapper[4792]: I0319 17:08:46.985098 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.020583 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.031690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67aa9bdc-577d-4f0b-9900-9c91da75278a" (UID: "67aa9bdc-577d-4f0b-9900-9c91da75278a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.086319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fff6h\" (UniqueName: \"kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h\") pod \"db1742b5-7b52-49d1-8dba-f9c27446efb2\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.086378 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom\") pod \"db1742b5-7b52-49d1-8dba-f9c27446efb2\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.086447 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data\") pod \"db1742b5-7b52-49d1-8dba-f9c27446efb2\" (UID: 
\"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.086551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle\") pod \"db1742b5-7b52-49d1-8dba-f9c27446efb2\" (UID: \"db1742b5-7b52-49d1-8dba-f9c27446efb2\") " Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.087236 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.111403 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db1742b5-7b52-49d1-8dba-f9c27446efb2" (UID: "db1742b5-7b52-49d1-8dba-f9c27446efb2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.113400 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h" (OuterVolumeSpecName: "kube-api-access-fff6h") pod "db1742b5-7b52-49d1-8dba-f9c27446efb2" (UID: "db1742b5-7b52-49d1-8dba-f9c27446efb2"). InnerVolumeSpecName "kube-api-access-fff6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.134406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" event={"ID":"67aa9bdc-577d-4f0b-9900-9c91da75278a","Type":"ContainerDied","Data":"d6dec32c1f72c5e69f1155e3391b2103d02dd476bc9c195d7a4ebff149182b5a"} Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.134433 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-897fbdd64-wsxgg" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.134459 4792 scope.go:117] "RemoveContainer" containerID="11f543830139381a5f20c51ac506c42fc92a2ffad701c3e4ab8b6bf987a490a3" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.156449 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.156579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dd0be369-d704-43ad-851a-c7e24798a150","Type":"ContainerDied","Data":"a84fef8f841754ba6cf3967558098f10d4c4c27b1f3247530cb83d0b2ac4b914"} Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.172872 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.174441 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96125084-cfab-452e-9b96-6643e257344c","Type":"ContainerDied","Data":"f4e465e174c1fd45de2797153fd7eaa646a8e4dcfa0606891a44f49eacd7ab5e"} Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.176902 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" event={"ID":"db1742b5-7b52-49d1-8dba-f9c27446efb2","Type":"ContainerDied","Data":"829d0127d6b973c48c31a7734e39f5699788c037dde59ac5b8c0b672d4f03f2e"} Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.176964 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5c5d8dc8-z7j2w" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.185993 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.197470 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fff6h\" (UniqueName: \"kubernetes.io/projected/db1742b5-7b52-49d1-8dba-f9c27446efb2-kube-api-access-fff6h\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.197549 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.276344 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data" (OuterVolumeSpecName: "config-data") pod "67aa9bdc-577d-4f0b-9900-9c91da75278a" (UID: "67aa9bdc-577d-4f0b-9900-9c91da75278a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.300976 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa9bdc-577d-4f0b-9900-9c91da75278a-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.375158 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.375183 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db1742b5-7b52-49d1-8dba-f9c27446efb2" (UID: "db1742b5-7b52-49d1-8dba-f9c27446efb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.383988 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data" (OuterVolumeSpecName: "config-data") pod "db1742b5-7b52-49d1-8dba-f9c27446efb2" (UID: "db1742b5-7b52-49d1-8dba-f9c27446efb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.403543 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.403576 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.403590 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1742b5-7b52-49d1-8dba-f9c27446efb2-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.429128 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.432377 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data" (OuterVolumeSpecName: "config-data") pod "96125084-cfab-452e-9b96-6643e257344c" (UID: "96125084-cfab-452e-9b96-6643e257344c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.436761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data" (OuterVolumeSpecName: "config-data") pod "dd0be369-d704-43ad-851a-c7e24798a150" (UID: "dd0be369-d704-43ad-851a-c7e24798a150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.505935 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.506269 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0be369-d704-43ad-851a-c7e24798a150-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.506281 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96125084-cfab-452e-9b96-6643e257344c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.540321 4792 scope.go:117] "RemoveContainer" containerID="c81c68f24b41aeb2aa27803e933e534588b06e1e5cb0aa7bfc40c4ce4ddd9bf6" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.598817 4792 scope.go:117] "RemoveContainer" containerID="61659cac33f04ce90c2f57c93cc4d636b1cb113b94cda883dc8f8c03db706fe1" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.640808 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.642981 4792 scope.go:117] "RemoveContainer" containerID="32b20247b645b394650638e5d327a94960c29ff55e242205af334beefa4422c0" Mar 19 17:08:47 crc 
kubenswrapper[4792]: I0319 17:08:47.660461 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-897fbdd64-wsxgg"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.672605 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.679213 4792 scope.go:117] "RemoveContainer" containerID="079127b573bb771ca7ec002f865b64697e9c3f4c18d484333b33e3b5fc9ea6fd" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.688598 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.707724 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.711809 4792 scope.go:117] "RemoveContainer" containerID="9dba31a3d180846869e38480f3a2226948f195cf9625f8cea42e60fc9c6518be" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.724794 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.738769 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.765186 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" path="/var/lib/kubelet/pods/67aa9bdc-577d-4f0b-9900-9c91da75278a/volumes" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.765940 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96125084-cfab-452e-9b96-6643e257344c" path="/var/lib/kubelet/pods/96125084-cfab-452e-9b96-6643e257344c/volumes" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.766613 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0be369-d704-43ad-851a-c7e24798a150" 
path="/var/lib/kubelet/pods/dd0be369-d704-43ad-851a-c7e24798a150/volumes" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.779440 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-d5c5d8dc8-z7j2w"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.787594 4792 scope.go:117] "RemoveContainer" containerID="00d05aa3b8c197aa882ec643ef4af0eac5d6f5bbf05e8371f7aa851091e9f3a8" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791129 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791647 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-notification-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791666 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-notification-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791692 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791698 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791712 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-central-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791719 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-central-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791731 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="probe" Mar 19 
17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791737 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="probe" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791747 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="proxy-httpd" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791753 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="proxy-httpd" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791762 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="cinder-scheduler" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791801 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="cinder-scheduler" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791814 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791820 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791827 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="sg-core" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791834 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="sg-core" Mar 19 17:08:47 crc kubenswrapper[4792]: E0319 17:08:47.791923 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerName="heat-engine" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.791930 
4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerName="heat-engine" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792138 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792151 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-central-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792162 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="ceilometer-notification-agent" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792175 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="sg-core" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792186 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" containerName="heat-engine" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792200 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="cinder-scheduler" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792215 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="proxy-httpd" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792229 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0be369-d704-43ad-851a-c7e24798a150" containerName="probe" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.792762 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="67aa9bdc-577d-4f0b-9900-9c91da75278a" containerName="heat-cfnapi" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 
17:08:47.799079 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.803012 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.803029 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.815193 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.817419 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.823683 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.862940 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.873664 4792 scope.go:117] "RemoveContainer" containerID="51daa3bd7a5c20c03c0aa64aca9ae0bef3f00a152cf1dd89ef2d94cb749038a0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.906948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929082 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwz4\" (UniqueName: \"kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929155 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929287 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d9a5546-9c67-4684-8efd-c6c515dcb25d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929333 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.929400 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.930951 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts\") pod \"ceilometer-0\" 
(UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.931327 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.932943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.933110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.933295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.933429 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvsd\" (UniqueName: \"kubernetes.io/projected/4d9a5546-9c67-4684-8efd-c6c515dcb25d-kube-api-access-pkvsd\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " 
pod="openstack/cinder-scheduler-0" Mar 19 17:08:47 crc kubenswrapper[4792]: I0319 17:08:47.933529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d9a5546-9c67-4684-8efd-c6c515dcb25d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042319 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d9a5546-9c67-4684-8efd-c6c515dcb25d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042442 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042647 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042688 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042720 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042827 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc 
kubenswrapper[4792]: I0319 17:08:48.042849 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkvsd\" (UniqueName: \"kubernetes.io/projected/4d9a5546-9c67-4684-8efd-c6c515dcb25d-kube-api-access-pkvsd\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.042944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwwz4\" (UniqueName: \"kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.043021 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.043115 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.054320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.055033 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.056954 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.057895 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.060517 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.060974 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.064545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.064782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9a5546-9c67-4684-8efd-c6c515dcb25d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.068630 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkvsd\" (UniqueName: \"kubernetes.io/projected/4d9a5546-9c67-4684-8efd-c6c515dcb25d-kube-api-access-pkvsd\") pod \"cinder-scheduler-0\" (UID: \"4d9a5546-9c67-4684-8efd-c6c515dcb25d\") " pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.069673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwwz4\" (UniqueName: \"kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4\") pod \"ceilometer-0\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.141382 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.162452 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.245165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" event={"ID":"5027af97-8929-4efd-b9e0-47736ca10da2","Type":"ContainerStarted","Data":"fa80c62ab82397f6bdd3be4d2c052621b256d0c74c90521daa5c40d85d375f1a"} Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.278602 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" podStartSLOduration=3.937532644 podStartE2EDuration="30.278578658s" podCreationTimestamp="2026-03-19 17:08:18 +0000 UTC" firstStartedPulling="2026-03-19 17:08:19.964494711 +0000 UTC m=+1663.110552241" lastFinishedPulling="2026-03-19 17:08:46.305540725 +0000 UTC m=+1689.451598255" observedRunningTime="2026-03-19 17:08:48.270261389 +0000 UTC m=+1691.416318929" watchObservedRunningTime="2026-03-19 17:08:48.278578658 +0000 UTC m=+1691.424636208" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.293899 4792 generic.go:334] "Generic (PLEG): container finished" podID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerID="3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450" exitCode=0 Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.297401 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerDied","Data":"3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450"} Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.298571 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" 
event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerStarted","Data":"8a0cd9ffb8fb458a605592afeddc250b5796629a316d620c724dc0ef7a6e884c"} Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.344916 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:50416->10.217.0.197:9292: read: connection reset by peer" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.345395 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:50402->10.217.0.197:9292: read: connection reset by peer" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.381403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerStarted","Data":"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b"} Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.417404 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r85rk" podStartSLOduration=4.893757575 podStartE2EDuration="23.417351138s" podCreationTimestamp="2026-03-19 17:08:25 +0000 UTC" firstStartedPulling="2026-03-19 17:08:27.781199622 +0000 UTC m=+1670.927257152" lastFinishedPulling="2026-03-19 17:08:46.304793175 +0000 UTC m=+1689.450850715" observedRunningTime="2026-03-19 17:08:48.411901679 +0000 UTC m=+1691.557959219" watchObservedRunningTime="2026-03-19 17:08:48.417351138 +0000 UTC m=+1691.563408678" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.588831 4792 scope.go:117] 
"RemoveContainer" containerID="c3df5665c2b421922eaee727841f02a943dd42dfdeb5a547ab4d2237e0fcc0f2" Mar 19 17:08:48 crc kubenswrapper[4792]: I0319 17:08:48.739808 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:08:48 crc kubenswrapper[4792]: E0319 17:08:48.740372 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.003240 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.135351 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.198:9292/healthcheck\": read tcp 10.217.0.2:35156->10.217.0.198:9292: read: connection reset by peer" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.148045 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.198:9292/healthcheck\": read tcp 10.217.0.2:35158->10.217.0.198:9292: read: connection reset by peer" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.295750 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.440335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerStarted","Data":"11e8d65e5af7c8b3ca209c1422ebe2e819f0c7cbe7571062cbd0a40f46c9b2db"} Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.455200 4792 generic.go:334] "Generic (PLEG): container finished" podID="da737c96-aa96-4a26-8fe5-33778519b02d" containerID="d0c26199bfce4450860cca013e45a244323038f5192444cb572f50e0cde2e9b8" exitCode=0 Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.455280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerDied","Data":"d0c26199bfce4450860cca013e45a244323038f5192444cb572f50e0cde2e9b8"} Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.481811 4792 generic.go:334] "Generic (PLEG): container finished" podID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerID="d7642887d1fb972ae5659d67a11e7271a301b5230392701b116dad0b2817e27a" exitCode=0 Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.481906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerDied","Data":"d7642887d1fb972ae5659d67a11e7271a301b5230392701b116dad0b2817e27a"} Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.485157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d9a5546-9c67-4684-8efd-c6c515dcb25d","Type":"ContainerStarted","Data":"ca1e4abc46d254f1c51733b719d30b7a1f0624cc4932d6b186444f3d80e81a7a"} Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.628982 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.710610 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711062 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711192 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711286 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711370 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.711521 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mr9\" (UniqueName: \"kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9\") pod \"f69ff164-0421-4131-92f1-88b1dbbac7d3\" (UID: \"f69ff164-0421-4131-92f1-88b1dbbac7d3\") " Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.713450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.717508 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs" (OuterVolumeSpecName: "logs") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.724005 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.724035 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f69ff164-0421-4131-92f1-88b1dbbac7d3-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.755139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts" (OuterVolumeSpecName: "scripts") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.762607 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9" (OuterVolumeSpecName: "kube-api-access-z5mr9") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "kube-api-access-z5mr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.831656 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1742b5-7b52-49d1-8dba-f9c27446efb2" path="/var/lib/kubelet/pods/db1742b5-7b52-49d1-8dba-f9c27446efb2/volumes" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.896178 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.896607 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mr9\" (UniqueName: \"kubernetes.io/projected/f69ff164-0421-4131-92f1-88b1dbbac7d3-kube-api-access-z5mr9\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.918030 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:49 crc kubenswrapper[4792]: I0319 17:08:49.968888 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data" (OuterVolumeSpecName: "config-data") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.001683 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.001712 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.020614 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808" (OuterVolumeSpecName: "glance") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "pvc-f406fea9-b42c-4c85-920d-4d104deeb808". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.090274 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f69ff164-0421-4131-92f1-88b1dbbac7d3" (UID: "f69ff164-0421-4131-92f1-88b1dbbac7d3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.104550 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") on node \"crc\" " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.104608 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69ff164-0421-4131-92f1-88b1dbbac7d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.142795 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.143172 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f406fea9-b42c-4c85-920d-4d104deeb808" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808") on node "crc" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.208440 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.305855 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.415326 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.415667 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjbtt\" (UniqueName: \"kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.415880 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.415934 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416026 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416057 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416191 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416540 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs" (OuterVolumeSpecName: "logs") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416771 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.416921 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs\") pod \"da737c96-aa96-4a26-8fe5-33778519b02d\" (UID: \"da737c96-aa96-4a26-8fe5-33778519b02d\") " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.417472 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 
19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.417489 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da737c96-aa96-4a26-8fe5-33778519b02d-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.429358 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt" (OuterVolumeSpecName: "kube-api-access-cjbtt") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "kube-api-access-cjbtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.432118 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts" (OuterVolumeSpecName: "scripts") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.483915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82" (OuterVolumeSpecName: "glance") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.492855 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.525489 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjbtt\" (UniqueName: \"kubernetes.io/projected/da737c96-aa96-4a26-8fe5-33778519b02d-kube-api-access-cjbtt\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.525599 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.525612 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.525647 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") on node \"crc\" " Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.588748 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.588982 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82") on node "crc" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.609236 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da737c96-aa96-4a26-8fe5-33778519b02d","Type":"ContainerDied","Data":"a99713fdb1f2cf54753375ca4fb6fc97dc9958f68d984b442bf62ec0f589f138"} Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.609293 4792 scope.go:117] "RemoveContainer" containerID="d0c26199bfce4450860cca013e45a244323038f5192444cb572f50e0cde2e9b8" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.609444 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.630109 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.642529 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data" (OuterVolumeSpecName: "config-data") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.662449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerStarted","Data":"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8"} Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.688447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f69ff164-0421-4131-92f1-88b1dbbac7d3","Type":"ContainerDied","Data":"791a84281596446464f2eaab952f2e4b9df221b5cecae9a536aff627e5b2e9a6"} Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.688589 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.720722 4792 scope.go:117] "RemoveContainer" containerID="482f902e126b3cd1ae01a7f21ff6a7aacd4f8c12e2b1de995d82188875ae87cb" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.732208 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.756908 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.764997 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da737c96-aa96-4a26-8fe5-33778519b02d" (UID: "da737c96-aa96-4a26-8fe5-33778519b02d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.769960 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.789471 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:50 crc kubenswrapper[4792]: E0319 17:08:50.790002 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790019 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: E0319 17:08:50.790033 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790039 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: E0319 17:08:50.790075 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790081 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: E0319 17:08:50.790092 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790097 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790302 
4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790323 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-httpd" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790334 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.790349 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" containerName="glance-log" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.791530 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.800783 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.800914 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.827050 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.836328 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da737c96-aa96-4a26-8fe5-33778519b02d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.848682 4792 scope.go:117] "RemoveContainer" containerID="d7642887d1fb972ae5659d67a11e7271a301b5230392701b116dad0b2817e27a" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.918655 4792 scope.go:117] "RemoveContainer" 
containerID="0202957be291e371b746b6e8b63164def65e04a55e4792e0048049b0a584e935" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-logs\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938466 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prpxm\" (UniqueName: \"kubernetes.io/projected/d2e42385-c657-4f83-9f18-82209d504136-kube-api-access-prpxm\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.938679 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.950231 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:50 crc kubenswrapper[4792]: I0319 17:08:50.974670 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:50.999446 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.014897 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.024052 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.024714 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.026270 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-logs\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prpxm\" (UniqueName: \"kubernetes.io/projected/d2e42385-c657-4f83-9f18-82209d504136-kube-api-access-prpxm\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.040686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.044683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.045896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.050894 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.051348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.051490 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e42385-c657-4f83-9f18-82209d504136-logs\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.052806 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e42385-c657-4f83-9f18-82209d504136-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 
17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.056699 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.056763 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4b43246ff1db308c2bef8dd59bebb849755d71eee7e8415d63550c78edf7118/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.059039 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.077383 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prpxm\" (UniqueName: \"kubernetes.io/projected/d2e42385-c657-4f83-9f18-82209d504136-kube-api-access-prpxm\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.138116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f406fea9-b42c-4c85-920d-4d104deeb808\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f406fea9-b42c-4c85-920d-4d104deeb808\") pod \"glance-default-external-api-0\" (UID: \"d2e42385-c657-4f83-9f18-82209d504136\") " pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142543 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142651 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6vc\" (UniqueName: \"kubernetes.io/projected/641b598a-d3b7-46bf-a1cc-aecf296a0afc-kube-api-access-sn6vc\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142764 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.142922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.143199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.148461 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.246541 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6vc\" (UniqueName: \"kubernetes.io/projected/641b598a-d3b7-46bf-a1cc-aecf296a0afc-kube-api-access-sn6vc\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.246922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.246951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.246974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.247029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" 
Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.247131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.247163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.247219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.250402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.251360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641b598a-d3b7-46bf-a1cc-aecf296a0afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 
17:08:51.255471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.258515 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.258562 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46aca6488ccc0c45ce72b5d671de4e2f1b9d18700c0b7fecdb3a92995140584f/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.260713 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.275678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.275888 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641b598a-d3b7-46bf-a1cc-aecf296a0afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.279562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6vc\" (UniqueName: \"kubernetes.io/projected/641b598a-d3b7-46bf-a1cc-aecf296a0afc-kube-api-access-sn6vc\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.360655 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c766bf56-804a-4bdb-9a73-1cdabcc70d82\") pod \"glance-default-internal-api-0\" (UID: \"641b598a-d3b7-46bf-a1cc-aecf296a0afc\") " pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.661741 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.807211 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da737c96-aa96-4a26-8fe5-33778519b02d" path="/var/lib/kubelet/pods/da737c96-aa96-4a26-8fe5-33778519b02d/volumes" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.808141 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69ff164-0421-4131-92f1-88b1dbbac7d3" path="/var/lib/kubelet/pods/f69ff164-0421-4131-92f1-88b1dbbac7d3/volumes" Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.815120 4792 generic.go:334] "Generic (PLEG): container finished" podID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerID="340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8" exitCode=0 Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.815228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerDied","Data":"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8"} Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.825522 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerStarted","Data":"0d13c388bf16b5850ae5f088e52a018b436a22574a39ede8159d298a56c2b3a5"} Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.856019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d9a5546-9c67-4684-8efd-c6c515dcb25d","Type":"ContainerStarted","Data":"1f3c7aa25ec1865b44a3c60a18ddad579b22374eb885c5ba2368c7484383bd08"} Mar 19 17:08:51 crc kubenswrapper[4792]: I0319 17:08:51.892964 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.479821 4792 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 19 17:08:52 crc kubenswrapper[4792]: W0319 17:08:52.504406 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod641b598a_d3b7_46bf_a1cc_aecf296a0afc.slice/crio-8453e79d5cb8001c61ee0c8e0223c3af95558d2bccd0b5d2ea9350519fd0b78a WatchSource:0}: Error finding container 8453e79d5cb8001c61ee0c8e0223c3af95558d2bccd0b5d2ea9350519fd0b78a: Status 404 returned error can't find the container with id 8453e79d5cb8001c61ee0c8e0223c3af95558d2bccd0b5d2ea9350519fd0b78a Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.893062 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerStarted","Data":"b1debf73ed0650f2a5f3ed8f389791b631455d1d1527fe673753b94e76146f1f"} Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.895325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2e42385-c657-4f83-9f18-82209d504136","Type":"ContainerStarted","Data":"db2ea1da6305d860a083d66aaf796116b2a83eafe11ddce571c0ff508176259f"} Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.896665 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"641b598a-d3b7-46bf-a1cc-aecf296a0afc","Type":"ContainerStarted","Data":"8453e79d5cb8001c61ee0c8e0223c3af95558d2bccd0b5d2ea9350519fd0b78a"} Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.900276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d9a5546-9c67-4684-8efd-c6c515dcb25d","Type":"ContainerStarted","Data":"050089c93f5c6f98858fc8be9fea30c73044ff9a704565f1c77b0acd27fec0dc"} Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.905705 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerStarted","Data":"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef"} Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.926524 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.926501311 podStartE2EDuration="5.926501311s" podCreationTimestamp="2026-03-19 17:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:52.920204178 +0000 UTC m=+1696.066261718" watchObservedRunningTime="2026-03-19 17:08:52.926501311 +0000 UTC m=+1696.072558851" Mar 19 17:08:52 crc kubenswrapper[4792]: I0319 17:08:52.941145 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2bmhr" podStartSLOduration=9.972364523 podStartE2EDuration="13.941126852s" podCreationTimestamp="2026-03-19 17:08:39 +0000 UTC" firstStartedPulling="2026-03-19 17:08:48.318972897 +0000 UTC m=+1691.465030437" lastFinishedPulling="2026-03-19 17:08:52.287735226 +0000 UTC m=+1695.433792766" observedRunningTime="2026-03-19 17:08:52.935749065 +0000 UTC m=+1696.081806605" watchObservedRunningTime="2026-03-19 17:08:52.941126852 +0000 UTC m=+1696.087184392" Mar 19 17:08:53 crc kubenswrapper[4792]: I0319 17:08:53.163685 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 19 17:08:53 crc kubenswrapper[4792]: I0319 17:08:53.961251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2e42385-c657-4f83-9f18-82209d504136","Type":"ContainerStarted","Data":"11a8b30b4fdada6e89715cc222837d34f127546bce210b0ac29b99a3eaacdd59"} Mar 19 17:08:54 crc kubenswrapper[4792]: I0319 17:08:54.001589 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"641b598a-d3b7-46bf-a1cc-aecf296a0afc","Type":"ContainerStarted","Data":"a43c9d4f17d8feed8cb83147fb2422cbfc3e25f8b20f5a708abe8d0450cb4ee3"} Mar 19 17:08:54 crc kubenswrapper[4792]: I0319 17:08:54.036972 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerStarted","Data":"dd260648e24d23ad85a98a24ec7fc1dc6740f4a633b1da050f2c54c87bf3421a"} Mar 19 17:08:55 crc kubenswrapper[4792]: I0319 17:08:55.050175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2e42385-c657-4f83-9f18-82209d504136","Type":"ContainerStarted","Data":"5671336ccd01303cfcfe4a95b8c83bbfdb372b3c5d5eb887dd56bedca2f5826d"} Mar 19 17:08:55 crc kubenswrapper[4792]: I0319 17:08:55.054182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"641b598a-d3b7-46bf-a1cc-aecf296a0afc","Type":"ContainerStarted","Data":"9123f6a2cff960b85caa4f93180f6b21ffe3d07b17b10cac74cf0582d0cd8d13"} Mar 19 17:08:55 crc kubenswrapper[4792]: I0319 17:08:55.080153 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.080131381 podStartE2EDuration="5.080131381s" podCreationTimestamp="2026-03-19 17:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:55.074178678 +0000 UTC m=+1698.220236238" watchObservedRunningTime="2026-03-19 17:08:55.080131381 +0000 UTC m=+1698.226188921" Mar 19 17:08:55 crc kubenswrapper[4792]: I0319 17:08:55.997298 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:55 crc kubenswrapper[4792]: I0319 17:08:55.998724 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:08:56 crc kubenswrapper[4792]: E0319 17:08:56.740969 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="d7885af7-09a3-4ea4-b59f-2de96f42fd0b" Mar 19 17:08:56 crc kubenswrapper[4792]: I0319 17:08:56.789197 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.789174647 podStartE2EDuration="6.789174647s" podCreationTimestamp="2026-03-19 17:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:08:55.103627536 +0000 UTC m=+1698.249685076" watchObservedRunningTime="2026-03-19 17:08:56.789174647 +0000 UTC m=+1699.935232187" Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.056296 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r85rk" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" probeResult="failure" output=< Mar 19 17:08:57 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:08:57 crc kubenswrapper[4792]: > Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerStarted","Data":"c66629016a6691354e540cf9c0e4e4e4b7f4508262b7de34aaec9306ed81224c"} Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081383 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56847f1-ff06-4020-a851-b79384fe6692" 
containerName="ceilometer-central-agent" containerID="cri-o://0d13c388bf16b5850ae5f088e52a018b436a22574a39ede8159d298a56c2b3a5" gracePeriod=30 Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081421 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081479 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="sg-core" containerID="cri-o://dd260648e24d23ad85a98a24ec7fc1dc6740f4a633b1da050f2c54c87bf3421a" gracePeriod=30 Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081567 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-notification-agent" containerID="cri-o://b1debf73ed0650f2a5f3ed8f389791b631455d1d1527fe673753b94e76146f1f" gracePeriod=30 Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.081525 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="proxy-httpd" containerID="cri-o://c66629016a6691354e540cf9c0e4e4e4b7f4508262b7de34aaec9306ed81224c" gracePeriod=30 Mar 19 17:08:57 crc kubenswrapper[4792]: I0319 17:08:57.112852 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.205411592 podStartE2EDuration="10.112828852s" podCreationTimestamp="2026-03-19 17:08:47 +0000 UTC" firstStartedPulling="2026-03-19 17:08:49.073577642 +0000 UTC m=+1692.219635182" lastFinishedPulling="2026-03-19 17:08:55.980994902 +0000 UTC m=+1699.127052442" observedRunningTime="2026-03-19 17:08:57.099020173 +0000 UTC m=+1700.245077713" watchObservedRunningTime="2026-03-19 17:08:57.112828852 +0000 UTC m=+1700.258886412" Mar 19 17:08:58 crc kubenswrapper[4792]: 
I0319 17:08:58.095234 4792 generic.go:334] "Generic (PLEG): container finished" podID="d56847f1-ff06-4020-a851-b79384fe6692" containerID="c66629016a6691354e540cf9c0e4e4e4b7f4508262b7de34aaec9306ed81224c" exitCode=0 Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.095277 4792 generic.go:334] "Generic (PLEG): container finished" podID="d56847f1-ff06-4020-a851-b79384fe6692" containerID="dd260648e24d23ad85a98a24ec7fc1dc6740f4a633b1da050f2c54c87bf3421a" exitCode=2 Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.095293 4792 generic.go:334] "Generic (PLEG): container finished" podID="d56847f1-ff06-4020-a851-b79384fe6692" containerID="b1debf73ed0650f2a5f3ed8f389791b631455d1d1527fe673753b94e76146f1f" exitCode=0 Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.095293 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerDied","Data":"c66629016a6691354e540cf9c0e4e4e4b7f4508262b7de34aaec9306ed81224c"} Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.095333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerDied","Data":"dd260648e24d23ad85a98a24ec7fc1dc6740f4a633b1da050f2c54c87bf3421a"} Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.095347 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerDied","Data":"b1debf73ed0650f2a5f3ed8f389791b631455d1d1527fe673753b94e76146f1f"} Mar 19 17:08:58 crc kubenswrapper[4792]: I0319 17:08:58.402448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 19 17:08:59 crc kubenswrapper[4792]: I0319 17:08:59.481975 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:59 crc 
kubenswrapper[4792]: I0319 17:08:59.482463 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:08:59 crc kubenswrapper[4792]: I0319 17:08:59.543519 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:09:00 crc kubenswrapper[4792]: I0319 17:09:00.164803 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:09:00 crc kubenswrapper[4792]: I0319 17:09:00.220167 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.149823 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.149895 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.190140 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.199552 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.662070 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.662116 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.716307 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.725184 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:01 crc kubenswrapper[4792]: I0319 17:09:01.740432 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:09:01 crc kubenswrapper[4792]: E0319 17:09:01.740659 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.140797 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2bmhr" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="registry-server" containerID="cri-o://330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef" gracePeriod=2 Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.142131 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.142166 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.142180 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.142195 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 19 17:09:02 
crc kubenswrapper[4792]: I0319 17:09:02.717313 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.890122 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities\") pod \"61fb6bd8-1309-411f-b8bc-c8272384de52\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.890188 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwgft\" (UniqueName: \"kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft\") pod \"61fb6bd8-1309-411f-b8bc-c8272384de52\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.890218 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content\") pod \"61fb6bd8-1309-411f-b8bc-c8272384de52\" (UID: \"61fb6bd8-1309-411f-b8bc-c8272384de52\") " Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.891064 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities" (OuterVolumeSpecName: "utilities") pod "61fb6bd8-1309-411f-b8bc-c8272384de52" (UID: "61fb6bd8-1309-411f-b8bc-c8272384de52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.899784 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft" (OuterVolumeSpecName: "kube-api-access-mwgft") pod "61fb6bd8-1309-411f-b8bc-c8272384de52" (UID: "61fb6bd8-1309-411f-b8bc-c8272384de52"). InnerVolumeSpecName "kube-api-access-mwgft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.921590 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61fb6bd8-1309-411f-b8bc-c8272384de52" (UID: "61fb6bd8-1309-411f-b8bc-c8272384de52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.992476 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.992510 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwgft\" (UniqueName: \"kubernetes.io/projected/61fb6bd8-1309-411f-b8bc-c8272384de52-kube-api-access-mwgft\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:02 crc kubenswrapper[4792]: I0319 17:09:02.992521 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fb6bd8-1309-411f-b8bc-c8272384de52-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.156700 4792 generic.go:334] "Generic (PLEG): container finished" podID="61fb6bd8-1309-411f-b8bc-c8272384de52" 
containerID="330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef" exitCode=0 Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.156809 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerDied","Data":"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef"} Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.156853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmhr" event={"ID":"61fb6bd8-1309-411f-b8bc-c8272384de52","Type":"ContainerDied","Data":"8a0cd9ffb8fb458a605592afeddc250b5796629a316d620c724dc0ef7a6e884c"} Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.156870 4792 scope.go:117] "RemoveContainer" containerID="330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.156868 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmhr" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.212596 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.219988 4792 scope.go:117] "RemoveContainer" containerID="340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.224585 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmhr"] Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.264071 4792 scope.go:117] "RemoveContainer" containerID="3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.333288 4792 scope.go:117] "RemoveContainer" containerID="330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef" Mar 19 17:09:03 crc kubenswrapper[4792]: E0319 17:09:03.333649 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef\": container with ID starting with 330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef not found: ID does not exist" containerID="330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.333680 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef"} err="failed to get container status \"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef\": rpc error: code = NotFound desc = could not find container \"330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef\": container with ID starting with 330f2a01a680143194283f8030289b389edef71b81a5b2785fcf71de4ef4ebef not found: 
ID does not exist" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.333705 4792 scope.go:117] "RemoveContainer" containerID="340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8" Mar 19 17:09:03 crc kubenswrapper[4792]: E0319 17:09:03.333980 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8\": container with ID starting with 340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8 not found: ID does not exist" containerID="340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.334000 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8"} err="failed to get container status \"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8\": rpc error: code = NotFound desc = could not find container \"340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8\": container with ID starting with 340e908515e662b81fc72b566a28e5df3bae0e368bc52227e1cb363238337bf8 not found: ID does not exist" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.334013 4792 scope.go:117] "RemoveContainer" containerID="3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450" Mar 19 17:09:03 crc kubenswrapper[4792]: E0319 17:09:03.334188 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450\": container with ID starting with 3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450 not found: ID does not exist" containerID="3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.334232 4792 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450"} err="failed to get container status \"3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450\": rpc error: code = NotFound desc = could not find container \"3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450\": container with ID starting with 3c74b082c23d97cb84355e7f6fc59bf3363c3d216f62f90c29b17c0c015f0450 not found: ID does not exist" Mar 19 17:09:03 crc kubenswrapper[4792]: I0319 17:09:03.823295 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" path="/var/lib/kubelet/pods/61fb6bd8-1309-411f-b8bc-c8272384de52/volumes" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.171292 4792 generic.go:334] "Generic (PLEG): container finished" podID="5027af97-8929-4efd-b9e0-47736ca10da2" containerID="fa80c62ab82397f6bdd3be4d2c052621b256d0c74c90521daa5c40d85d375f1a" exitCode=0 Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.171337 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" event={"ID":"5027af97-8929-4efd-b9e0-47736ca10da2","Type":"ContainerDied","Data":"fa80c62ab82397f6bdd3be4d2c052621b256d0c74c90521daa5c40d85d375f1a"} Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.533608 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.533966 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.536115 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.542755 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.542902 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 17:09:04 crc kubenswrapper[4792]: I0319 17:09:04.596386 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.184451 4792 generic.go:334] "Generic (PLEG): container finished" podID="d56847f1-ff06-4020-a851-b79384fe6692" containerID="0d13c388bf16b5850ae5f088e52a018b436a22574a39ede8159d298a56c2b3a5" exitCode=0 Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.184536 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerDied","Data":"0d13c388bf16b5850ae5f088e52a018b436a22574a39ede8159d298a56c2b3a5"} Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.184886 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56847f1-ff06-4020-a851-b79384fe6692","Type":"ContainerDied","Data":"11e8d65e5af7c8b3ca209c1422ebe2e819f0c7cbe7571062cbd0a40f46c9b2db"} Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.184902 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e8d65e5af7c8b3ca209c1422ebe2e819f0c7cbe7571062cbd0a40f46c9b2db" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.236454 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.355710 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.355763 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.355838 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.356023 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.356059 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.356115 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.356206 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwwz4\" (UniqueName: \"kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4\") pod \"d56847f1-ff06-4020-a851-b79384fe6692\" (UID: \"d56847f1-ff06-4020-a851-b79384fe6692\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.358145 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.360247 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.364161 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4" (OuterVolumeSpecName: "kube-api-access-zwwz4") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "kube-api-access-zwwz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.411156 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts" (OuterVolumeSpecName: "scripts") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.433085 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.459670 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.459706 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.459718 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwwz4\" (UniqueName: \"kubernetes.io/projected/d56847f1-ff06-4020-a851-b79384fe6692-kube-api-access-zwwz4\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.459729 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc 
kubenswrapper[4792]: I0319 17:09:05.459743 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56847f1-ff06-4020-a851-b79384fe6692-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.485290 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.517782 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.546493 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data" (OuterVolumeSpecName: "config-data") pod "d56847f1-ff06-4020-a851-b79384fe6692" (UID: "d56847f1-ff06-4020-a851-b79384fe6692"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.561903 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.561935 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56847f1-ff06-4020-a851-b79384fe6692-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.663724 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts\") pod \"5027af97-8929-4efd-b9e0-47736ca10da2\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.663868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle\") pod \"5027af97-8929-4efd-b9e0-47736ca10da2\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.663916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data\") pod \"5027af97-8929-4efd-b9e0-47736ca10da2\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.664102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxc6l\" (UniqueName: \"kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l\") pod \"5027af97-8929-4efd-b9e0-47736ca10da2\" (UID: \"5027af97-8929-4efd-b9e0-47736ca10da2\") " Mar 
19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.668069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l" (OuterVolumeSpecName: "kube-api-access-rxc6l") pod "5027af97-8929-4efd-b9e0-47736ca10da2" (UID: "5027af97-8929-4efd-b9e0-47736ca10da2"). InnerVolumeSpecName "kube-api-access-rxc6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.669005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts" (OuterVolumeSpecName: "scripts") pod "5027af97-8929-4efd-b9e0-47736ca10da2" (UID: "5027af97-8929-4efd-b9e0-47736ca10da2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.692120 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5027af97-8929-4efd-b9e0-47736ca10da2" (UID: "5027af97-8929-4efd-b9e0-47736ca10da2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.706007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data" (OuterVolumeSpecName: "config-data") pod "5027af97-8929-4efd-b9e0-47736ca10da2" (UID: "5027af97-8929-4efd-b9e0-47736ca10da2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.768486 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxc6l\" (UniqueName: \"kubernetes.io/projected/5027af97-8929-4efd-b9e0-47736ca10da2-kube-api-access-rxc6l\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.768520 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.768532 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:05 crc kubenswrapper[4792]: I0319 17:09:05.768542 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5027af97-8929-4efd-b9e0-47736ca10da2-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.195411 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.195594 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.195608 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-v9mgf" event={"ID":"5027af97-8929-4efd-b9e0-47736ca10da2","Type":"ContainerDied","Data":"570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450"} Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.197606 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570712a2edc1a0dbee1fcca92786feda391c79ea83b0ada395125a9eae12a450" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.226199 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.240234 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268017 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268443 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="registry-server" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268460 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="registry-server" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268492 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-central-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268499 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-central-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268508 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="sg-core" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268515 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="sg-core" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268526 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-notification-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268532 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-notification-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268547 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5027af97-8929-4efd-b9e0-47736ca10da2" containerName="nova-cell0-conductor-db-sync" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268552 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5027af97-8929-4efd-b9e0-47736ca10da2" containerName="nova-cell0-conductor-db-sync" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268559 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="extract-utilities" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268566 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="extract-utilities" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268576 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="extract-content" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268581 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="extract-content" Mar 19 17:09:06 crc kubenswrapper[4792]: E0319 17:09:06.268595 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="proxy-httpd" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268601 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="proxy-httpd" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268802 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5027af97-8929-4efd-b9e0-47736ca10da2" containerName="nova-cell0-conductor-db-sync" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268814 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="sg-core" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268939 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="proxy-httpd" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268947 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fb6bd8-1309-411f-b8bc-c8272384de52" containerName="registry-server" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.268964 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-notification-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.269058 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56847f1-ff06-4020-a851-b79384fe6692" containerName="ceilometer-central-agent" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.275974 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.278345 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.278449 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.290415 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.372764 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.374186 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.378113 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.380120 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8x4rw" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382597 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 
17:09:06.382687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnvm\" (UniqueName: \"kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382764 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.382860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.394606 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:06 crc 
kubenswrapper[4792]: I0319 17:09:06.485105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.485564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8l9l\" (UniqueName: \"kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.485791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.485937 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrnvm\" (UniqueName: \"kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.486738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.487004 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" 
Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.487062 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.491704 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.492870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.507499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrnvm\" (UniqueName: \"kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.511418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.523677 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.588486 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.588619 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8l9l\" (UniqueName: \"kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.588661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.591680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.592545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc 
kubenswrapper[4792]: I0319 17:09:06.596201 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.606072 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8l9l\" (UniqueName: \"kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l\") pod \"nova-cell0-conductor-0\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:06 crc kubenswrapper[4792]: I0319 17:09:06.691611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:07 crc kubenswrapper[4792]: I0319 17:09:07.061420 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r85rk" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" probeResult="failure" output=< Mar 19 17:09:07 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:09:07 crc kubenswrapper[4792]: > Mar 19 17:09:07 crc kubenswrapper[4792]: I0319 17:09:07.184715 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:07 crc kubenswrapper[4792]: I0319 17:09:07.206799 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerStarted","Data":"447c08e28589ca29c164b9b96a6e800638464dad86e7f6959f8163537270bdf7"} Mar 19 17:09:07 crc kubenswrapper[4792]: I0319 17:09:07.343937 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:07 crc kubenswrapper[4792]: I0319 17:09:07.756511 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56847f1-ff06-4020-a851-b79384fe6692" path="/var/lib/kubelet/pods/d56847f1-ff06-4020-a851-b79384fe6692/volumes" Mar 19 17:09:08 
crc kubenswrapper[4792]: I0319 17:09:08.222570 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerStarted","Data":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} Mar 19 17:09:08 crc kubenswrapper[4792]: I0319 17:09:08.224266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2","Type":"ContainerStarted","Data":"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa"} Mar 19 17:09:08 crc kubenswrapper[4792]: I0319 17:09:08.224314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2","Type":"ContainerStarted","Data":"fe6a9202f1252eb2a9a5965c49a75304763b4bc3307a25cda91d2372db5fc121"} Mar 19 17:09:08 crc kubenswrapper[4792]: I0319 17:09:08.224441 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:08 crc kubenswrapper[4792]: I0319 17:09:08.241935 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.241710487 podStartE2EDuration="2.241710487s" podCreationTimestamp="2026-03-19 17:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:08.23961203 +0000 UTC m=+1711.385669570" watchObservedRunningTime="2026-03-19 17:09:08.241710487 +0000 UTC m=+1711.387768027" Mar 19 17:09:09 crc kubenswrapper[4792]: I0319 17:09:09.239907 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerStarted","Data":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} Mar 19 17:09:10 crc kubenswrapper[4792]: I0319 17:09:10.250671 
4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerStarted","Data":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} Mar 19 17:09:12 crc kubenswrapper[4792]: I0319 17:09:12.276222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7885af7-09a3-4ea4-b59f-2de96f42fd0b","Type":"ContainerStarted","Data":"c06fabd8d9057d0958eea1b53b04c27c1118a3f7a8382c3cb7c23729678eebbd"} Mar 19 17:09:12 crc kubenswrapper[4792]: I0319 17:09:12.293577 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.741788041 podStartE2EDuration="1m6.293557978s" podCreationTimestamp="2026-03-19 17:08:06 +0000 UTC" firstStartedPulling="2026-03-19 17:08:07.674677877 +0000 UTC m=+1650.820735417" lastFinishedPulling="2026-03-19 17:09:11.226447824 +0000 UTC m=+1714.372505354" observedRunningTime="2026-03-19 17:09:12.293189659 +0000 UTC m=+1715.439247209" watchObservedRunningTime="2026-03-19 17:09:12.293557978 +0000 UTC m=+1715.439615528" Mar 19 17:09:12 crc kubenswrapper[4792]: I0319 17:09:12.740685 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:09:12 crc kubenswrapper[4792]: E0319 17:09:12.741307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:09:13 crc kubenswrapper[4792]: I0319 17:09:13.288367 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerStarted","Data":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} Mar 19 17:09:13 crc kubenswrapper[4792]: I0319 17:09:13.288551 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:09:13 crc kubenswrapper[4792]: I0319 17:09:13.320189 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.271503586 podStartE2EDuration="7.320168911s" podCreationTimestamp="2026-03-19 17:09:06 +0000 UTC" firstStartedPulling="2026-03-19 17:09:07.185713629 +0000 UTC m=+1710.331771169" lastFinishedPulling="2026-03-19 17:09:12.234378954 +0000 UTC m=+1715.380436494" observedRunningTime="2026-03-19 17:09:13.307932616 +0000 UTC m=+1716.453990166" watchObservedRunningTime="2026-03-19 17:09:13.320168911 +0000 UTC m=+1716.466226451" Mar 19 17:09:15 crc kubenswrapper[4792]: I0319 17:09:15.412139 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="96125084-cfab-452e-9b96-6643e257344c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.233:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:09:16 crc kubenswrapper[4792]: I0319 17:09:16.058358 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:09:16 crc kubenswrapper[4792]: I0319 17:09:16.115675 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:09:16 crc kubenswrapper[4792]: I0319 17:09:16.306813 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:09:16 crc kubenswrapper[4792]: I0319 17:09:16.448792 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] 
Mar 19 17:09:16 crc kubenswrapper[4792]: I0319 17:09:16.449043 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" containerID="cri-o://28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" gracePeriod=30 Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.452388 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.454624 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.458120 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.458176 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 
17:09:16.694513 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.696134 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.697292 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 19 17:09:16 crc kubenswrapper[4792]: E0319 17:09:16.697326 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.358905 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r85rk" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" containerID="cri-o://5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b" gracePeriod=2 Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.890183 4792 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.953247 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities\") pod \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.953812 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content\") pod \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.954012 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities" (OuterVolumeSpecName: "utilities") pod "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" (UID: "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.954199 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9z9j\" (UniqueName: \"kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j\") pod \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\" (UID: \"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31\") " Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.955125 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:17 crc kubenswrapper[4792]: I0319 17:09:17.960364 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j" (OuterVolumeSpecName: "kube-api-access-w9z9j") pod "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" (UID: "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31"). InnerVolumeSpecName "kube-api-access-w9z9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.018078 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" (UID: "cacd4cdc-33d4-4160-ba55-81a3f1ca6b31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.057430 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.057661 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9z9j\" (UniqueName: \"kubernetes.io/projected/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31-kube-api-access-w9z9j\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.259971 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.260205 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-central-agent" containerID="cri-o://a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" gracePeriod=30 Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.260528 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-notification-agent" containerID="cri-o://99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" gracePeriod=30 Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.260528 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="sg-core" containerID="cri-o://6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" gracePeriod=30 Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.260547 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="proxy-httpd" containerID="cri-o://e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" gracePeriod=30 Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.378133 4792 generic.go:334] "Generic (PLEG): container finished" podID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerID="5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b" exitCode=0 Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.378205 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r85rk" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.378209 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerDied","Data":"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b"} Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.378437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r85rk" event={"ID":"cacd4cdc-33d4-4160-ba55-81a3f1ca6b31","Type":"ContainerDied","Data":"e04b9d69efaa8a63c184d6b4c27d780e3187f26606e1702154358ca7b22ce932"} Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.378482 4792 scope.go:117] "RemoveContainer" containerID="5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.477632 4792 scope.go:117] "RemoveContainer" containerID="ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.484457 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.495420 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r85rk"] Mar 19 17:09:18 crc 
kubenswrapper[4792]: I0319 17:09:18.561938 4792 scope.go:117] "RemoveContainer" containerID="9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.591106 4792 scope.go:117] "RemoveContainer" containerID="5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b" Mar 19 17:09:18 crc kubenswrapper[4792]: E0319 17:09:18.592293 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b\": container with ID starting with 5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b not found: ID does not exist" containerID="5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.592334 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b"} err="failed to get container status \"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b\": rpc error: code = NotFound desc = could not find container \"5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b\": container with ID starting with 5333864f3142ed51e8059de111718cdde49f0e97b2f148969e2ce3f1fb4f046b not found: ID does not exist" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.592360 4792 scope.go:117] "RemoveContainer" containerID="ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2" Mar 19 17:09:18 crc kubenswrapper[4792]: E0319 17:09:18.592790 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2\": container with ID starting with ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2 not found: ID does not exist" 
containerID="ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.592814 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2"} err="failed to get container status \"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2\": rpc error: code = NotFound desc = could not find container \"ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2\": container with ID starting with ad214c5b07607bf928ef91347217a21927ddd5fde3416d2ab5de62b96f1113d2 not found: ID does not exist" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.592828 4792 scope.go:117] "RemoveContainer" containerID="9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041" Mar 19 17:09:18 crc kubenswrapper[4792]: E0319 17:09:18.593395 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041\": container with ID starting with 9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041 not found: ID does not exist" containerID="9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041" Mar 19 17:09:18 crc kubenswrapper[4792]: I0319 17:09:18.593446 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041"} err="failed to get container status \"9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041\": rpc error: code = NotFound desc = could not find container \"9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041\": container with ID starting with 9f8d6ae6a917403efafcc471eda637409523ee4013251de793ef845e5f085041 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.069551 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221041 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221438 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221470 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrnvm\" (UniqueName: \"kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.221680 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd\") pod \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\" (UID: \"ecbc1bfe-8eeb-4253-8403-325e458f7a52\") " Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.222122 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.222353 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.222725 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.222747 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecbc1bfe-8eeb-4253-8403-325e458f7a52-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.228594 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm" (OuterVolumeSpecName: "kube-api-access-hrnvm") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "kube-api-access-hrnvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.229559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts" (OuterVolumeSpecName: "scripts") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.277734 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.325280 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.325329 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.325341 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrnvm\" (UniqueName: \"kubernetes.io/projected/ecbc1bfe-8eeb-4253-8403-325e458f7a52-kube-api-access-hrnvm\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.333760 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.358458 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data" (OuterVolumeSpecName: "config-data") pod "ecbc1bfe-8eeb-4253-8403-325e458f7a52" (UID: "ecbc1bfe-8eeb-4253-8403-325e458f7a52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391226 4792 generic.go:334] "Generic (PLEG): container finished" podID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" exitCode=0 Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391260 4792 generic.go:334] "Generic (PLEG): container finished" podID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" exitCode=2 Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391269 4792 generic.go:334] "Generic (PLEG): container finished" podID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" exitCode=0 Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391277 4792 generic.go:334] "Generic (PLEG): container finished" podID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" exitCode=0 Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerDied","Data":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerDied","Data":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391353 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerDied","Data":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} Mar 19 17:09:19 crc 
kubenswrapper[4792]: I0319 17:09:19.391362 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerDied","Data":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecbc1bfe-8eeb-4253-8403-325e458f7a52","Type":"ContainerDied","Data":"447c08e28589ca29c164b9b96a6e800638464dad86e7f6959f8163537270bdf7"} Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391393 4792 scope.go:117] "RemoveContainer" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.391517 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.419979 4792 scope.go:117] "RemoveContainer" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.427288 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.427416 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbc1bfe-8eeb-4253-8403-325e458f7a52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.435092 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.445581 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.448697 4792 
scope.go:117] "RemoveContainer" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.468978 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469453 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="extract-content" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469471 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="extract-content" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469497 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="extract-utilities" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469505 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="extract-utilities" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469517 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="proxy-httpd" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469523 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="proxy-httpd" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469533 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-central-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469538 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-central-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469556 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-notification-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469563 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-notification-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469597 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469604 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.469609 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="sg-core" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469615 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="sg-core" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469801 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" containerName="registry-server" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469815 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-notification-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469821 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="sg-core" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469847 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="proxy-httpd" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.469859 4792 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" containerName="ceilometer-central-agent" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.490476 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.472033 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.493894 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.494053 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.521063 4792 scope.go:117] "RemoveContainer" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.599685 4792 scope.go:117] "RemoveContainer" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.601708 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": container with ID starting with e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13 not found: ID does not exist" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.601747 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} err="failed to get container status \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": rpc error: code = NotFound desc = could not find container 
\"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": container with ID starting with e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.601780 4792 scope.go:117] "RemoveContainer" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.603231 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": container with ID starting with 6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b not found: ID does not exist" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.603265 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} err="failed to get container status \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": rpc error: code = NotFound desc = could not find container \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": container with ID starting with 6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.603281 4792 scope.go:117] "RemoveContainer" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.605305 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": container with ID starting with 99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72 not found: ID does not exist" 
containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605342 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} err="failed to get container status \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": rpc error: code = NotFound desc = could not find container \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": container with ID starting with 99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605367 4792 scope.go:117] "RemoveContainer" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: E0319 17:09:19.605622 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": container with ID starting with a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25 not found: ID does not exist" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605649 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} err="failed to get container status \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": rpc error: code = NotFound desc = could not find container \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": container with ID starting with a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605664 4792 scope.go:117] 
"RemoveContainer" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605883 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} err="failed to get container status \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": rpc error: code = NotFound desc = could not find container \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": container with ID starting with e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.605910 4792 scope.go:117] "RemoveContainer" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606067 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} err="failed to get container status \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": rpc error: code = NotFound desc = could not find container \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": container with ID starting with 6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606084 4792 scope.go:117] "RemoveContainer" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606381 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} err="failed to get container status \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": rpc error: code = 
NotFound desc = could not find container \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": container with ID starting with 99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606401 4792 scope.go:117] "RemoveContainer" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606588 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} err="failed to get container status \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": rpc error: code = NotFound desc = could not find container \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": container with ID starting with a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606610 4792 scope.go:117] "RemoveContainer" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606899 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} err="failed to get container status \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": rpc error: code = NotFound desc = could not find container \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": container with ID starting with e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.606917 4792 scope.go:117] "RemoveContainer" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc 
kubenswrapper[4792]: I0319 17:09:19.607198 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} err="failed to get container status \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": rpc error: code = NotFound desc = could not find container \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": container with ID starting with 6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607217 4792 scope.go:117] "RemoveContainer" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607388 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} err="failed to get container status \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": rpc error: code = NotFound desc = could not find container \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": container with ID starting with 99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607407 4792 scope.go:117] "RemoveContainer" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607532 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} err="failed to get container status \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": rpc error: code = NotFound desc = could not find container \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": container 
with ID starting with a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607547 4792 scope.go:117] "RemoveContainer" containerID="e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607686 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13"} err="failed to get container status \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": rpc error: code = NotFound desc = could not find container \"e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13\": container with ID starting with e4521d542706d0fcf499369b89d8ab3e491e0f6bbfd78af351374d9dfbde9a13 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607704 4792 scope.go:117] "RemoveContainer" containerID="6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607881 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b"} err="failed to get container status \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": rpc error: code = NotFound desc = could not find container \"6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b\": container with ID starting with 6d52ba5c483b5a527a8ee799cd1707e263d7bbf44002015fbe5cf3596139249b not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.607904 4792 scope.go:117] "RemoveContainer" containerID="99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.608065 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72"} err="failed to get container status \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": rpc error: code = NotFound desc = could not find container \"99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72\": container with ID starting with 99ade36059651342f45e1214bbf304cc9b93df0529a3e7cea58cfb396d0a2a72 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.608093 4792 scope.go:117] "RemoveContainer" containerID="a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.608830 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25"} err="failed to get container status \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": rpc error: code = NotFound desc = could not find container \"a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25\": container with ID starting with a84d890bb16bb67dfd854f7e6b681e729a938eb59000257fdd5fc67c51955e25 not found: ID does not exist" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.631563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.631614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frpz\" (UniqueName: \"kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 
17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.631816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.631883 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.631924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.632005 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.632044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.733371 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.733743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.733772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.733855 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.734302 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.734343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.734428 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.734621 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.734692 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frpz\" (UniqueName: \"kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.749813 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.754761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.755435 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data\") pod \"ceilometer-0\" (UID: 
\"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.757621 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacd4cdc-33d4-4160-ba55-81a3f1ca6b31" path="/var/lib/kubelet/pods/cacd4cdc-33d4-4160-ba55-81a3f1ca6b31/volumes" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.757778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.758723 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbc1bfe-8eeb-4253-8403-325e458f7a52" path="/var/lib/kubelet/pods/ecbc1bfe-8eeb-4253-8403-325e458f7a52/volumes" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.760464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frpz\" (UniqueName: \"kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz\") pod \"ceilometer-0\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " pod="openstack/ceilometer-0" Mar 19 17:09:19 crc kubenswrapper[4792]: I0319 17:09:19.888612 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.002120 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.148178 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle\") pod \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.148755 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data\") pod \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.148851 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8l9l\" (UniqueName: \"kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l\") pod \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\" (UID: \"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2\") " Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.159396 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l" (OuterVolumeSpecName: "kube-api-access-z8l9l") pod "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" (UID: "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2"). InnerVolumeSpecName "kube-api-access-z8l9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.192629 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data" (OuterVolumeSpecName: "config-data") pod "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" (UID: "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.202304 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" (UID: "a37070c2-7fa6-40d6-9452-0fd38ff8e4d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.251360 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.251398 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.251409 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8l9l\" (UniqueName: \"kubernetes.io/projected/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2-kube-api-access-z8l9l\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.370760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.405660 4792 generic.go:334] "Generic (PLEG): container finished" podID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" exitCode=0 Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.405725 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2","Type":"ContainerDied","Data":"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa"} Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.405754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a37070c2-7fa6-40d6-9452-0fd38ff8e4d2","Type":"ContainerDied","Data":"fe6a9202f1252eb2a9a5965c49a75304763b4bc3307a25cda91d2372db5fc121"} Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.405771 4792 scope.go:117] "RemoveContainer" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.405754 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.407334 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerStarted","Data":"710484fc0df7d134c8b1a8be63217dec2c8861fccecd3470c580dc33cbf6fd39"} Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.430111 4792 scope.go:117] "RemoveContainer" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" Mar 19 17:09:20 crc kubenswrapper[4792]: E0319 17:09:20.430583 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa\": container with ID starting with 28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa not found: ID does not exist" containerID="28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.430627 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa"} 
err="failed to get container status \"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa\": rpc error: code = NotFound desc = could not find container \"28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa\": container with ID starting with 28ceca3885aa50a7cd4a642b227ecc7574a1f30af38bf0ff7308ef5971338efa not found: ID does not exist" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.469116 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.487798 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.511498 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:20 crc kubenswrapper[4792]: E0319 17:09:20.512075 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.512091 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.512399 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" containerName="nova-cell0-conductor-conductor" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.513308 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.515090 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8x4rw" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.516125 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.533379 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.665539 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.665654 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.665724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhxc5\" (UniqueName: \"kubernetes.io/projected/d4354ccf-6194-4050-9e4e-342d090f707d-kube-api-access-nhxc5\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.768105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.768291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.769015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhxc5\" (UniqueName: \"kubernetes.io/projected/d4354ccf-6194-4050-9e4e-342d090f707d-kube-api-access-nhxc5\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.774647 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.787888 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4354ccf-6194-4050-9e4e-342d090f707d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.790745 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhxc5\" (UniqueName: \"kubernetes.io/projected/d4354ccf-6194-4050-9e4e-342d090f707d-kube-api-access-nhxc5\") pod \"nova-cell0-conductor-0\" (UID: 
\"d4354ccf-6194-4050-9e4e-342d090f707d\") " pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:20 crc kubenswrapper[4792]: I0319 17:09:20.859266 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:21 crc kubenswrapper[4792]: I0319 17:09:21.184595 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:21 crc kubenswrapper[4792]: I0319 17:09:21.432137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerStarted","Data":"9cf45f92e9a96c8ba4707b51a7f72c3b7eaa62b2649a3b568161ed531e235aae"} Mar 19 17:09:21 crc kubenswrapper[4792]: W0319 17:09:21.484998 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4354ccf_6194_4050_9e4e_342d090f707d.slice/crio-e3f313f2f7aea58d5f69ca5fd498159d9fdba06a8cf772d2297bbc00bbf5fe25 WatchSource:0}: Error finding container e3f313f2f7aea58d5f69ca5fd498159d9fdba06a8cf772d2297bbc00bbf5fe25: Status 404 returned error can't find the container with id e3f313f2f7aea58d5f69ca5fd498159d9fdba06a8cf772d2297bbc00bbf5fe25 Mar 19 17:09:21 crc kubenswrapper[4792]: I0319 17:09:21.500067 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 17:09:21 crc kubenswrapper[4792]: I0319 17:09:21.754527 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37070c2-7fa6-40d6-9452-0fd38ff8e4d2" path="/var/lib/kubelet/pods/a37070c2-7fa6-40d6-9452-0fd38ff8e4d2/volumes" Mar 19 17:09:22 crc kubenswrapper[4792]: I0319 17:09:22.443599 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerStarted","Data":"61e42c614521d48911493ccfa70c4ea5569e50c319f3dced5eb9cd99a0690542"} Mar 19 17:09:22 crc kubenswrapper[4792]: 
I0319 17:09:22.446602 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4354ccf-6194-4050-9e4e-342d090f707d","Type":"ContainerStarted","Data":"e75464b8d5a7d9b730e7cec3d72c2e1507b51ed8d43758a29a5e6b253a608aa5"} Mar 19 17:09:22 crc kubenswrapper[4792]: I0319 17:09:22.446626 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4354ccf-6194-4050-9e4e-342d090f707d","Type":"ContainerStarted","Data":"e3f313f2f7aea58d5f69ca5fd498159d9fdba06a8cf772d2297bbc00bbf5fe25"} Mar 19 17:09:22 crc kubenswrapper[4792]: I0319 17:09:22.446872 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:22 crc kubenswrapper[4792]: I0319 17:09:22.467583 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.467564013 podStartE2EDuration="2.467564013s" podCreationTimestamp="2026-03-19 17:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:22.465176348 +0000 UTC m=+1725.611233898" watchObservedRunningTime="2026-03-19 17:09:22.467564013 +0000 UTC m=+1725.613621553" Mar 19 17:09:23 crc kubenswrapper[4792]: I0319 17:09:23.459245 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerStarted","Data":"41a76fe745553c6ac39b75d14362d4eda039db9b1ea1b78b2c182dab32799795"} Mar 19 17:09:23 crc kubenswrapper[4792]: I0319 17:09:23.740158 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:09:23 crc kubenswrapper[4792]: E0319 17:09:23.740532 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496436 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-central-agent" containerID="cri-o://9cf45f92e9a96c8ba4707b51a7f72c3b7eaa62b2649a3b568161ed531e235aae" gracePeriod=30 Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496502 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="proxy-httpd" containerID="cri-o://9b4e856dca76240e8e917b59ea2a6c3c163860e0529153d0445d53997007f189" gracePeriod=30 Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496550 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-notification-agent" containerID="cri-o://61e42c614521d48911493ccfa70c4ea5569e50c319f3dced5eb9cd99a0690542" gracePeriod=30 Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496594 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="sg-core" containerID="cri-o://41a76fe745553c6ac39b75d14362d4eda039db9b1ea1b78b2c182dab32799795" gracePeriod=30 Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerStarted","Data":"9b4e856dca76240e8e917b59ea2a6c3c163860e0529153d0445d53997007f189"} Mar 19 
17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.496941 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:09:25 crc kubenswrapper[4792]: I0319 17:09:25.524157 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9961741210000001 podStartE2EDuration="6.524140541s" podCreationTimestamp="2026-03-19 17:09:19 +0000 UTC" firstStartedPulling="2026-03-19 17:09:20.36812994 +0000 UTC m=+1723.514187480" lastFinishedPulling="2026-03-19 17:09:24.89609636 +0000 UTC m=+1728.042153900" observedRunningTime="2026-03-19 17:09:25.517451117 +0000 UTC m=+1728.663508677" watchObservedRunningTime="2026-03-19 17:09:25.524140541 +0000 UTC m=+1728.670198091" Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508695 4792 generic.go:334] "Generic (PLEG): container finished" podID="8747be1c-f861-45f8-940d-12540a75c6a9" containerID="9b4e856dca76240e8e917b59ea2a6c3c163860e0529153d0445d53997007f189" exitCode=0 Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508729 4792 generic.go:334] "Generic (PLEG): container finished" podID="8747be1c-f861-45f8-940d-12540a75c6a9" containerID="41a76fe745553c6ac39b75d14362d4eda039db9b1ea1b78b2c182dab32799795" exitCode=2 Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508737 4792 generic.go:334] "Generic (PLEG): container finished" podID="8747be1c-f861-45f8-940d-12540a75c6a9" containerID="61e42c614521d48911493ccfa70c4ea5569e50c319f3dced5eb9cd99a0690542" exitCode=0 Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerDied","Data":"9b4e856dca76240e8e917b59ea2a6c3c163860e0529153d0445d53997007f189"} Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerDied","Data":"41a76fe745553c6ac39b75d14362d4eda039db9b1ea1b78b2c182dab32799795"} Mar 19 17:09:26 crc kubenswrapper[4792]: I0319 17:09:26.508792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerDied","Data":"61e42c614521d48911493ccfa70c4ea5569e50c319f3dced5eb9cd99a0690542"} Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.131288 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-wsz2l"] Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.133535 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.142087 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-3376-account-create-update-ps95w"] Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.143541 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.145364 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.156629 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wsz2l"] Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.167198 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3376-account-create-update-ps95w"] Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.332250 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.332401 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhks\" (UniqueName: \"kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.332487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdv4f\" (UniqueName: \"kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.332660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.434470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.434548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.434633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhks\" (UniqueName: \"kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.434676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdv4f\" (UniqueName: \"kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.435509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.435751 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.454214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdv4f\" (UniqueName: \"kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f\") pod \"aodh-db-create-wsz2l\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.455465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhks\" (UniqueName: \"kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks\") pod \"aodh-3376-account-create-update-ps95w\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.497469 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:27 crc kubenswrapper[4792]: I0319 17:09:27.509764 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:28 crc kubenswrapper[4792]: W0319 17:09:28.036106 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd79db6b_b7c7_4d8f_9b7c_c853501d6706.slice/crio-2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd WatchSource:0}: Error finding container 2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd: Status 404 returned error can't find the container with id 2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.050303 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wsz2l"] Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.139660 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3376-account-create-update-ps95w"] Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.544408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3376-account-create-update-ps95w" event={"ID":"c576dde9-2cc3-4403-a106-7c9cb996287e","Type":"ContainerStarted","Data":"3a25d023da25c8874e36a26c96eb34acea89b96666086b5919a1d654552919d2"} Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.544789 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3376-account-create-update-ps95w" event={"ID":"c576dde9-2cc3-4403-a106-7c9cb996287e","Type":"ContainerStarted","Data":"e0e236fa6e2307a66b98cd9e6a19e5b1a2f29754385197bb3bec45507238e99e"} Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.546766 4792 generic.go:334] "Generic (PLEG): container finished" podID="dd79db6b-b7c7-4d8f-9b7c-c853501d6706" containerID="54237895011b42f1e1ef761f2c51fd652049608d60e7ed9708daa7fcc1061f55" exitCode=0 Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.546792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-db-create-wsz2l" event={"ID":"dd79db6b-b7c7-4d8f-9b7c-c853501d6706","Type":"ContainerDied","Data":"54237895011b42f1e1ef761f2c51fd652049608d60e7ed9708daa7fcc1061f55"} Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.546820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wsz2l" event={"ID":"dd79db6b-b7c7-4d8f-9b7c-c853501d6706","Type":"ContainerStarted","Data":"2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd"} Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.553631 4792 generic.go:334] "Generic (PLEG): container finished" podID="8747be1c-f861-45f8-940d-12540a75c6a9" containerID="9cf45f92e9a96c8ba4707b51a7f72c3b7eaa62b2649a3b568161ed531e235aae" exitCode=0 Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.553679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerDied","Data":"9cf45f92e9a96c8ba4707b51a7f72c3b7eaa62b2649a3b568161ed531e235aae"} Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.573537 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-3376-account-create-update-ps95w" podStartSLOduration=1.573519782 podStartE2EDuration="1.573519782s" podCreationTimestamp="2026-03-19 17:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:28.561280245 +0000 UTC m=+1731.707337785" watchObservedRunningTime="2026-03-19 17:09:28.573519782 +0000 UTC m=+1731.719577332" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.758850 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880156 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880251 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880371 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880417 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frpz\" (UniqueName: \"kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880670 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880735 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880753 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data\") pod \"8747be1c-f861-45f8-940d-12540a75c6a9\" (UID: \"8747be1c-f861-45f8-940d-12540a75c6a9\") " Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.880958 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.881947 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.881974 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8747be1c-f861-45f8-940d-12540a75c6a9-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.886434 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz" (OuterVolumeSpecName: "kube-api-access-9frpz") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "kube-api-access-9frpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.888080 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts" (OuterVolumeSpecName: "scripts") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.921743 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.974476 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.984851 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.984908 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frpz\" (UniqueName: \"kubernetes.io/projected/8747be1c-f861-45f8-940d-12540a75c6a9-kube-api-access-9frpz\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.984922 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:28 crc kubenswrapper[4792]: I0319 17:09:28.984932 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.006638 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data" (OuterVolumeSpecName: "config-data") pod "8747be1c-f861-45f8-940d-12540a75c6a9" (UID: "8747be1c-f861-45f8-940d-12540a75c6a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.085696 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8747be1c-f861-45f8-940d-12540a75c6a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.370125 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:09:29 crc kubenswrapper[4792]: E0319 17:09:29.370701 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-notification-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.370725 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-notification-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: E0319 17:09:29.370756 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-central-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.370764 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-central-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: E0319 17:09:29.370791 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="proxy-httpd" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.370797 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="proxy-httpd" Mar 19 17:09:29 crc kubenswrapper[4792]: E0319 17:09:29.370818 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="sg-core" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.370824 4792 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="sg-core" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.371047 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="sg-core" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.371069 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-central-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.371082 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="ceilometer-notification-agent" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.371096 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" containerName="proxy-httpd" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.372757 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.400396 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.400644 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.400850 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqnt\" (UniqueName: \"kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.401364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.505765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.506144 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.506299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqnt\" (UniqueName: \"kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.506326 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.506740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.527335 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mqnt\" (UniqueName: \"kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt\") pod \"community-operators-9xm4s\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.567286 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="c576dde9-2cc3-4403-a106-7c9cb996287e" containerID="3a25d023da25c8874e36a26c96eb34acea89b96666086b5919a1d654552919d2" exitCode=0 Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.567326 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3376-account-create-update-ps95w" event={"ID":"c576dde9-2cc3-4403-a106-7c9cb996287e","Type":"ContainerDied","Data":"3a25d023da25c8874e36a26c96eb34acea89b96666086b5919a1d654552919d2"} Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.570832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8747be1c-f861-45f8-940d-12540a75c6a9","Type":"ContainerDied","Data":"710484fc0df7d134c8b1a8be63217dec2c8861fccecd3470c580dc33cbf6fd39"} Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.570909 4792 scope.go:117] "RemoveContainer" containerID="9b4e856dca76240e8e917b59ea2a6c3c163860e0529153d0445d53997007f189" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.571079 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.632111 4792 scope.go:117] "RemoveContainer" containerID="41a76fe745553c6ac39b75d14362d4eda039db9b1ea1b78b2c182dab32799795" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.632289 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.641533 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.669213 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.673022 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.674115 4792 scope.go:117] "RemoveContainer" containerID="61e42c614521d48911493ccfa70c4ea5569e50c319f3dced5eb9cd99a0690542" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.675859 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.676054 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.678524 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.704591 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.713772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.713911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.713980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " 
pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.714852 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.715414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.715967 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.717020 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxlck\" (UniqueName: \"kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.728180 4792 scope.go:117] "RemoveContainer" containerID="9cf45f92e9a96c8ba4707b51a7f72c3b7eaa62b2649a3b568161ed531e235aae" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.755698 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8747be1c-f861-45f8-940d-12540a75c6a9" path="/var/lib/kubelet/pods/8747be1c-f861-45f8-940d-12540a75c6a9/volumes" Mar 19 17:09:29 crc 
kubenswrapper[4792]: I0319 17:09:29.820590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.820702 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.820741 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.820900 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxlck\" (UniqueName: \"kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.820994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.821079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.821179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.822695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.823503 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.825916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.827089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.828508 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.835593 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:29 crc kubenswrapper[4792]: I0319 17:09:29.840379 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxlck\" (UniqueName: \"kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck\") pod \"ceilometer-0\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " pod="openstack/ceilometer-0" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.007022 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.017237 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.026344 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts\") pod \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.026412 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdv4f\" (UniqueName: \"kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f\") pod \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\" (UID: \"dd79db6b-b7c7-4d8f-9b7c-c853501d6706\") " Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.027179 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd79db6b-b7c7-4d8f-9b7c-c853501d6706" (UID: "dd79db6b-b7c7-4d8f-9b7c-c853501d6706"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.035513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f" (OuterVolumeSpecName: "kube-api-access-hdv4f") pod "dd79db6b-b7c7-4d8f-9b7c-c853501d6706" (UID: "dd79db6b-b7c7-4d8f-9b7c-c853501d6706"). InnerVolumeSpecName "kube-api-access-hdv4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.131546 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.131820 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdv4f\" (UniqueName: \"kubernetes.io/projected/dd79db6b-b7c7-4d8f-9b7c-c853501d6706-kube-api-access-hdv4f\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.344834 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:09:30 crc kubenswrapper[4792]: W0319 17:09:30.349479 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccea905_8d78_4f68_865f_af56721dbe2d.slice/crio-052a0f6ba21a217b4821ef2287a375cf69d8ad570663b5bbe2ff40c73ca4527e WatchSource:0}: Error finding container 052a0f6ba21a217b4821ef2287a375cf69d8ad570663b5bbe2ff40c73ca4527e: Status 404 returned error can't find the container with id 052a0f6ba21a217b4821ef2287a375cf69d8ad570663b5bbe2ff40c73ca4527e Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.577169 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.612026 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-wsz2l" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.612014 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wsz2l" event={"ID":"dd79db6b-b7c7-4d8f-9b7c-c853501d6706","Type":"ContainerDied","Data":"2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd"} Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.614774 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2661c1f8e027cdb393f56221eedfb8e28690cd3742117d5cbed9f2c00f0980bd" Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.632690 4792 generic.go:334] "Generic (PLEG): container finished" podID="eccea905-8d78-4f68-865f-af56721dbe2d" containerID="bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b" exitCode=0 Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.633085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerDied","Data":"bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b"} Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.633159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerStarted","Data":"052a0f6ba21a217b4821ef2287a375cf69d8ad570663b5bbe2ff40c73ca4527e"} Mar 19 17:09:30 crc kubenswrapper[4792]: I0319 17:09:30.890774 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.084956 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.166231 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts\") pod \"c576dde9-2cc3-4403-a106-7c9cb996287e\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.166751 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdhks\" (UniqueName: \"kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks\") pod \"c576dde9-2cc3-4403-a106-7c9cb996287e\" (UID: \"c576dde9-2cc3-4403-a106-7c9cb996287e\") " Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.167240 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c576dde9-2cc3-4403-a106-7c9cb996287e" (UID: "c576dde9-2cc3-4403-a106-7c9cb996287e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.170828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks" (OuterVolumeSpecName: "kube-api-access-gdhks") pod "c576dde9-2cc3-4403-a106-7c9cb996287e" (UID: "c576dde9-2cc3-4403-a106-7c9cb996287e"). InnerVolumeSpecName "kube-api-access-gdhks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.269340 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdhks\" (UniqueName: \"kubernetes.io/projected/c576dde9-2cc3-4403-a106-7c9cb996287e-kube-api-access-gdhks\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.269370 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c576dde9-2cc3-4403-a106-7c9cb996287e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.483932 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mfkfn"] Mar 19 17:09:31 crc kubenswrapper[4792]: E0319 17:09:31.484915 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd79db6b-b7c7-4d8f-9b7c-c853501d6706" containerName="mariadb-database-create" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.484989 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd79db6b-b7c7-4d8f-9b7c-c853501d6706" containerName="mariadb-database-create" Mar 19 17:09:31 crc kubenswrapper[4792]: E0319 17:09:31.485060 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c576dde9-2cc3-4403-a106-7c9cb996287e" containerName="mariadb-account-create-update" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.485122 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c576dde9-2cc3-4403-a106-7c9cb996287e" containerName="mariadb-account-create-update" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.485401 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd79db6b-b7c7-4d8f-9b7c-c853501d6706" containerName="mariadb-database-create" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.485467 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c576dde9-2cc3-4403-a106-7c9cb996287e" 
containerName="mariadb-account-create-update" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.486507 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.490671 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.490843 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.513284 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mfkfn"] Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.576514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.576629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpsw\" (UniqueName: \"kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.576721 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc 
kubenswrapper[4792]: I0319 17:09:31.576746 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.671341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerStarted","Data":"e5a0db264e2776ed2eed7d5de042d6eb1dea421af2015f0b3882c4085dba0ee9"} Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.671395 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerStarted","Data":"1f92649bed9f69bd9474c1b964629b8464c556db38a5a7c556bc0980666e84e9"} Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.679221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcpsw\" (UniqueName: \"kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.679317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.679341 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.679430 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.688802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.702431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.702823 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3376-account-create-update-ps95w" event={"ID":"c576dde9-2cc3-4403-a106-7c9cb996287e","Type":"ContainerDied","Data":"e0e236fa6e2307a66b98cd9e6a19e5b1a2f29754385197bb3bec45507238e99e"} Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.702857 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e236fa6e2307a66b98cd9e6a19e5b1a2f29754385197bb3bec45507238e99e" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.702938 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3376-account-create-update-ps95w" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.704748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.727738 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.751347 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.762599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcpsw\" (UniqueName: \"kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw\") pod \"nova-cell0-cell-mapping-mfkfn\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") " pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.762995 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.788808 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.789004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5t5\" (UniqueName: \"kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5\") pod \"nova-api-0\" (UID: 
\"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.789103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.789134 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.858350 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.867024 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.870758 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.892307 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895251 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895296 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5t5\" (UniqueName: \"kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895357 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwwjf\" (UniqueName: \"kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895519 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895617 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.895981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.896963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.931734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc 
kubenswrapper[4792]: I0319 17:09:31.943252 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.943877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5t5\" (UniqueName: \"kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.967472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") " pod="openstack/nova-api-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.992974 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.998952 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwwjf\" (UniqueName: \"kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.999239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:31 crc kubenswrapper[4792]: I0319 17:09:31.999314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data\") pod 
\"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.001521 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.002316 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.012088 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.014416 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.019469 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.020037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwwjf\" (UniqueName: \"kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf\") pod \"nova-metadata-0\" (UID: 
\"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.021188 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.024244 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.066060 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.091055 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.118339 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7pz\" (UniqueName: \"kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.118724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.119115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc 
kubenswrapper[4792]: I0319 17:09:32.147098 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.223403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7pz\" (UniqueName: \"kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.223467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.223669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.234398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.236471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.254508 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.257039 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.263723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7pz\" (UniqueName: \"kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.309536 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.327826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.327964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.328213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp6m\" (UniqueName: 
\"kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.328453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.328595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.328630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.360422 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.362096 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.368200 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.426263 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rs4f\" (UniqueName: \"kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp6m\" (UniqueName: \"kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 
17:09:32.432733 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432781 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432890 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.432907 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.433975 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.434778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.435374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.435985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.436694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.437218 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.458802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp6m\" (UniqueName: \"kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m\") pod \"dnsmasq-dns-7877d89589-tl8v2\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.612595 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rs4f\" (UniqueName: \"kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.612930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.613059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.633427 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.699730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rs4f\" (UniqueName: \"kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.701673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.724067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data\") pod \"nova-scheduler-0\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.806760 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-j4x4v"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.823304 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.842326 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.842942 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gdn5p" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.843140 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.843333 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.872202 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j4x4v"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.891296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerStarted","Data":"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25"} Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.920277 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mfkfn"] Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.924088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerStarted","Data":"33d5ea7d532e3b437af7b46fbd783a65b7b60696da07aa34b6c6afb57da05f59"} Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.951804 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " 
pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.951944 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.967954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gxh\" (UniqueName: \"kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:32 crc kubenswrapper[4792]: I0319 17:09:32.968012 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.021234 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.091179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.091350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.093181 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gxh\" (UniqueName: \"kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.093220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.107406 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.110674 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-25gxh\" (UniqueName: \"kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.112871 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.123046 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: W0319 17:09:33.133174 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod078c8c6b_0852_41c2_8114_6ec521760afc.slice/crio-53ca95cf4663fd222bcb443b189d2b7678485d06d3313809b29856f8799488a4 WatchSource:0}: Error finding container 53ca95cf4663fd222bcb443b189d2b7678485d06d3313809b29856f8799488a4: Status 404 returned error can't find the container with id 53ca95cf4663fd222bcb443b189d2b7678485d06d3313809b29856f8799488a4 Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.146322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts\") pod \"aodh-db-sync-j4x4v\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.148938 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.211508 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:33 crc kubenswrapper[4792]: W0319 17:09:33.220507 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c167402_489f_40ac_ae7e_c57e0ecace2b.slice/crio-d17c5729deacbd4b8f8a0fb6a15ca97240fcf2ab72c25648321651ae60a92c04 WatchSource:0}: Error finding container d17c5729deacbd4b8f8a0fb6a15ca97240fcf2ab72c25648321651ae60a92c04: Status 404 returned error can't find the container with id d17c5729deacbd4b8f8a0fb6a15ca97240fcf2ab72c25648321651ae60a92c04 Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.401570 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.490123 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:09:33 crc kubenswrapper[4792]: W0319 17:09:33.498497 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7219fed_c8eb_4a92_9ff8_176b80d21e7c.slice/crio-17407790afde203215cbe6a7765d45b5baa23e537ae8dec401ecd1454436275a WatchSource:0}: Error finding container 17407790afde203215cbe6a7765d45b5baa23e537ae8dec401ecd1454436275a: Status 404 returned error can't find the container with id 17407790afde203215cbe6a7765d45b5baa23e537ae8dec401ecd1454436275a Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.654428 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvqdf"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.657243 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.663231 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.663531 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.689949 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvqdf"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.741200 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.741337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5lc\" (UniqueName: \"kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.741590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.741618 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.836328 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.849151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.849484 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.849805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.849975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5lc\" (UniqueName: \"kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: 
\"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.857767 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.860752 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.871992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.881463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5lc\" (UniqueName: \"kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc\") pod \"nova-cell1-conductor-db-sync-vvqdf\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.962803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mfkfn" event={"ID":"41c27def-27bf-4b67-abcf-428ff60a77bd","Type":"ContainerStarted","Data":"e86207d7420fea5580fe6c0e95d73b0f43bd8c149ef25c3dbad618108428a998"} Mar 19 17:09:33 crc 
kubenswrapper[4792]: I0319 17:09:33.962872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mfkfn" event={"ID":"41c27def-27bf-4b67-abcf-428ff60a77bd","Type":"ContainerStarted","Data":"1c0745c80461119c6c3967a1bd8e3e275265b52572dac6ceb62842009b0e25ef"} Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.975299 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.987924 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b7219fed-c8eb-4a92-9ff8-176b80d21e7c","Type":"ContainerStarted","Data":"17407790afde203215cbe6a7765d45b5baa23e537ae8dec401ecd1454436275a"} Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.989342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerStarted","Data":"53ca95cf4663fd222bcb443b189d2b7678485d06d3313809b29856f8799488a4"} Mar 19 17:09:33 crc kubenswrapper[4792]: I0319 17:09:33.991033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerStarted","Data":"d17c5729deacbd4b8f8a0fb6a15ca97240fcf2ab72c25648321651ae60a92c04"} Mar 19 17:09:34 crc kubenswrapper[4792]: I0319 17:09:34.001221 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mfkfn" podStartSLOduration=3.00119974 podStartE2EDuration="3.00119974s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:33.998430314 +0000 UTC m=+1737.144487854" watchObservedRunningTime="2026-03-19 17:09:34.00119974 +0000 UTC m=+1737.147257280" Mar 19 17:09:34 crc 
kubenswrapper[4792]: I0319 17:09:34.015786 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerStarted","Data":"9eb1d28195f4f70348faf9e08b2a603dd12cec9c9264a91157ccf16a78a7cc07"} Mar 19 17:09:34 crc kubenswrapper[4792]: I0319 17:09:34.017249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" event={"ID":"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694","Type":"ContainerStarted","Data":"5e004a0bbe43aaa34e89d1f3e3400f09ba06c78e054b2b6355466cc88f7e0755"} Mar 19 17:09:34 crc kubenswrapper[4792]: I0319 17:09:34.020296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14267d7e-df24-4020-80af-00ef78ef1105","Type":"ContainerStarted","Data":"9d57d041abcdf0617c2cd6e271d7ae3d0360f7330886ed771740e34199c9fb38"} Mar 19 17:09:34 crc kubenswrapper[4792]: I0319 17:09:34.130991 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j4x4v"] Mar 19 17:09:34 crc kubenswrapper[4792]: I0319 17:09:34.625191 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvqdf"] Mar 19 17:09:34 crc kubenswrapper[4792]: W0319 17:09:34.628189 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bbd4aa3_ab8f_496f_8c97_d99869f2c91a.slice/crio-d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd WatchSource:0}: Error finding container d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd: Status 404 returned error can't find the container with id d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.057216 4792 generic.go:334] "Generic (PLEG): container finished" podID="eccea905-8d78-4f68-865f-af56721dbe2d" 
containerID="2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25" exitCode=0 Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.057324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerDied","Data":"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25"} Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.070716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j4x4v" event={"ID":"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8","Type":"ContainerStarted","Data":"cd5469c2a51debd0b4e093e69615f08a21115512536440b144f6040d3cd5d792"} Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.078422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" event={"ID":"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694","Type":"ContainerDied","Data":"ad2f3b175610f4f85ecc572addd1cf98b6421286437f1290e509d09f40305367"} Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.078106 4792 generic.go:334] "Generic (PLEG): container finished" podID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerID="ad2f3b175610f4f85ecc572addd1cf98b6421286437f1290e509d09f40305367" exitCode=0 Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.116900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" event={"ID":"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a","Type":"ContainerStarted","Data":"93201ff122fb160c37abd2ab1cae25945a9f9e171b51e8ce13ba109374c920e7"} Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.116954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" event={"ID":"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a","Type":"ContainerStarted","Data":"d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd"} Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.189602 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" podStartSLOduration=2.189581023 podStartE2EDuration="2.189581023s" podCreationTimestamp="2026-03-19 17:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:35.146063568 +0000 UTC m=+1738.292121118" watchObservedRunningTime="2026-03-19 17:09:35.189581023 +0000 UTC m=+1738.335638563" Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.651411 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.663019 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:09:35 crc kubenswrapper[4792]: I0319 17:09:35.740503 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:09:35 crc kubenswrapper[4792]: E0319 17:09:35.740907 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:09:38 crc kubenswrapper[4792]: I0319 17:09:38.164188 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" event={"ID":"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694","Type":"ContainerStarted","Data":"ccff9e422e1a9d58bd786a2dd3137e8f3e3b4b668187b2d1fb3dc36948fcb04b"} Mar 19 17:09:38 crc kubenswrapper[4792]: I0319 17:09:38.164573 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:09:38 
crc kubenswrapper[4792]: I0319 17:09:38.193144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" podStartSLOduration=7.193124054 podStartE2EDuration="7.193124054s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:38.181785093 +0000 UTC m=+1741.327842653" watchObservedRunningTime="2026-03-19 17:09:38.193124054 +0000 UTC m=+1741.339181594" Mar 19 17:09:40 crc kubenswrapper[4792]: I0319 17:09:40.811311 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 17:09:41 crc kubenswrapper[4792]: I0319 17:09:41.226797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerStarted","Data":"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a"} Mar 19 17:09:41 crc kubenswrapper[4792]: I0319 17:09:41.267553 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xm4s" podStartSLOduration=2.097755357 podStartE2EDuration="12.267528622s" podCreationTimestamp="2026-03-19 17:09:29 +0000 UTC" firstStartedPulling="2026-03-19 17:09:30.636026191 +0000 UTC m=+1733.782083731" lastFinishedPulling="2026-03-19 17:09:40.805799456 +0000 UTC m=+1743.951856996" observedRunningTime="2026-03-19 17:09:41.255575053 +0000 UTC m=+1744.401632613" watchObservedRunningTime="2026-03-19 17:09:41.267528622 +0000 UTC m=+1744.413586152" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.242244 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerStarted","Data":"b2de50df3da091c3a148a34a4e1f91826218578c79c663ce87ad1c197e1108f4"} Mar 19 17:09:42 crc 
kubenswrapper[4792]: I0319 17:09:42.242571 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.245585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j4x4v" event={"ID":"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8","Type":"ContainerStarted","Data":"2ee98adfec418dd9ab41ad9d9f01da3b42eb2da6ed01318b9a2d5496b22376b6"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.247954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14267d7e-df24-4020-80af-00ef78ef1105","Type":"ContainerStarted","Data":"93a911859bad4b4a33929609d4a981aaa45240752ee191460602f0ba2e10885b"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.249576 4792 generic.go:334] "Generic (PLEG): container finished" podID="41c27def-27bf-4b67-abcf-428ff60a77bd" containerID="e86207d7420fea5580fe6c0e95d73b0f43bd8c149ef25c3dbad618108428a998" exitCode=0 Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.249653 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mfkfn" event={"ID":"41c27def-27bf-4b67-abcf-428ff60a77bd","Type":"ContainerDied","Data":"e86207d7420fea5580fe6c0e95d73b0f43bd8c149ef25c3dbad618108428a998"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.252088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b7219fed-c8eb-4a92-9ff8-176b80d21e7c","Type":"ContainerStarted","Data":"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.252188 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64" 
gracePeriod=30 Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.257502 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerStarted","Data":"ed7eab723d9d3120e3283d205e815a7fa0799cd11eff7684cbacda9f4bf80b31"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.257863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerStarted","Data":"9bcea6ae418d6c1fbbb41d4855a4ffb725d3c6bb28e5c74513d1e022b29c12c0"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.272360 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.136039697 podStartE2EDuration="13.272342955s" podCreationTimestamp="2026-03-19 17:09:29 +0000 UTC" firstStartedPulling="2026-03-19 17:09:30.646821936 +0000 UTC m=+1733.792879476" lastFinishedPulling="2026-03-19 17:09:40.783125194 +0000 UTC m=+1743.929182734" observedRunningTime="2026-03-19 17:09:42.260371107 +0000 UTC m=+1745.406428647" watchObservedRunningTime="2026-03-19 17:09:42.272342955 +0000 UTC m=+1745.418400495" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.272892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerStarted","Data":"bf001c92ebca9d26d4da4f8a16f1874cbf872c9e05570eeea7f3c97726164ae7"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.272933 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerStarted","Data":"3ef20cd5a09aefbc5dfa832d5beb5b8ceadbe67d83194b67e1c6a41f3dd3cab7"} Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.273000 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-log" containerID="cri-o://3ef20cd5a09aefbc5dfa832d5beb5b8ceadbe67d83194b67e1c6a41f3dd3cab7" gracePeriod=30 Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.273033 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-metadata" containerID="cri-o://bf001c92ebca9d26d4da4f8a16f1874cbf872c9e05570eeea7f3c97726164ae7" gracePeriod=30 Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.311580 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.782798795 podStartE2EDuration="11.311562052s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="2026-03-19 17:09:33.196616532 +0000 UTC m=+1736.342674072" lastFinishedPulling="2026-03-19 17:09:40.725379789 +0000 UTC m=+1743.871437329" observedRunningTime="2026-03-19 17:09:42.303337427 +0000 UTC m=+1745.449394967" watchObservedRunningTime="2026-03-19 17:09:42.311562052 +0000 UTC m=+1745.457619602" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.325345 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.12279086 podStartE2EDuration="11.32532709s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="2026-03-19 17:09:33.519067355 +0000 UTC m=+1736.665124895" lastFinishedPulling="2026-03-19 17:09:40.721603585 +0000 UTC m=+1743.867661125" observedRunningTime="2026-03-19 17:09:42.322128162 +0000 UTC m=+1745.468185702" watchObservedRunningTime="2026-03-19 17:09:42.32532709 +0000 UTC m=+1745.471384630" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.350557 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.48239399 
podStartE2EDuration="11.350533202s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="2026-03-19 17:09:33.846008089 +0000 UTC m=+1736.992065629" lastFinishedPulling="2026-03-19 17:09:40.714147301 +0000 UTC m=+1743.860204841" observedRunningTime="2026-03-19 17:09:42.342409008 +0000 UTC m=+1745.488466548" watchObservedRunningTime="2026-03-19 17:09:42.350533202 +0000 UTC m=+1745.496590742" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.364571 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-j4x4v" podStartSLOduration=3.696483957 podStartE2EDuration="10.364551766s" podCreationTimestamp="2026-03-19 17:09:32 +0000 UTC" firstStartedPulling="2026-03-19 17:09:34.140100723 +0000 UTC m=+1737.286158263" lastFinishedPulling="2026-03-19 17:09:40.808168532 +0000 UTC m=+1743.954226072" observedRunningTime="2026-03-19 17:09:42.362036887 +0000 UTC m=+1745.508094427" watchObservedRunningTime="2026-03-19 17:09:42.364551766 +0000 UTC m=+1745.510609306" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.379801 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.908906238 podStartE2EDuration="11.379779515s" podCreationTimestamp="2026-03-19 17:09:31 +0000 UTC" firstStartedPulling="2026-03-19 17:09:33.317080869 +0000 UTC m=+1736.463138409" lastFinishedPulling="2026-03-19 17:09:40.787954146 +0000 UTC m=+1743.934011686" observedRunningTime="2026-03-19 17:09:42.377818091 +0000 UTC m=+1745.523875641" watchObservedRunningTime="2026-03-19 17:09:42.379779515 +0000 UTC m=+1745.525837055" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.438724 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.636226 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" 
Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.710134 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"] Mar 19 17:09:42 crc kubenswrapper[4792]: I0319 17:09:42.710405 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="dnsmasq-dns" containerID="cri-o://2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc" gracePeriod=10 Mar 19 17:09:42 crc kubenswrapper[4792]: E0319 17:09:42.887154 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b28abf_0997_4ee6_a514_eb15f9955657.slice/crio-conmon-2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc.scope\": RecentStats: unable to find data in memory cache]" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.025101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.025182 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.068918 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.328700 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerID="bf001c92ebca9d26d4da4f8a16f1874cbf872c9e05570eeea7f3c97726164ae7" exitCode=0 Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.328774 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerID="3ef20cd5a09aefbc5dfa832d5beb5b8ceadbe67d83194b67e1c6a41f3dd3cab7" exitCode=143 Mar 19 17:09:43 crc 
kubenswrapper[4792]: I0319 17:09:43.328755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerDied","Data":"bf001c92ebca9d26d4da4f8a16f1874cbf872c9e05570eeea7f3c97726164ae7"} Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.328897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerDied","Data":"3ef20cd5a09aefbc5dfa832d5beb5b8ceadbe67d83194b67e1c6a41f3dd3cab7"} Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.332078 4792 generic.go:334] "Generic (PLEG): container finished" podID="29b28abf-0997-4ee6-a514-eb15f9955657" containerID="2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc" exitCode=0 Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.332153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" event={"ID":"29b28abf-0997-4ee6-a514-eb15f9955657","Type":"ContainerDied","Data":"2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc"} Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.376057 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.598029 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.608581 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714420 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714476 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714518 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8h5c\" (UniqueName: \"kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714570 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714652 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwwjf\" (UniqueName: \"kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf\") pod \"6c167402-489f-40ac-ae7e-c57e0ecace2b\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.714769 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.715009 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs\") pod \"6c167402-489f-40ac-ae7e-c57e0ecace2b\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.715053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data\") pod \"6c167402-489f-40ac-ae7e-c57e0ecace2b\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.715155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle\") pod \"6c167402-489f-40ac-ae7e-c57e0ecace2b\" (UID: \"6c167402-489f-40ac-ae7e-c57e0ecace2b\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.715256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb\") pod \"29b28abf-0997-4ee6-a514-eb15f9955657\" (UID: \"29b28abf-0997-4ee6-a514-eb15f9955657\") " Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.726069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf" (OuterVolumeSpecName: "kube-api-access-qwwjf") pod "6c167402-489f-40ac-ae7e-c57e0ecace2b" (UID: "6c167402-489f-40ac-ae7e-c57e0ecace2b"). InnerVolumeSpecName "kube-api-access-qwwjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.735058 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c" (OuterVolumeSpecName: "kube-api-access-w8h5c") pod "29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "kube-api-access-w8h5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.755311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs" (OuterVolumeSpecName: "logs") pod "6c167402-489f-40ac-ae7e-c57e0ecace2b" (UID: "6c167402-489f-40ac-ae7e-c57e0ecace2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.818982 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8h5c\" (UniqueName: \"kubernetes.io/projected/29b28abf-0997-4ee6-a514-eb15f9955657-kube-api-access-w8h5c\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.819003 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwwjf\" (UniqueName: \"kubernetes.io/projected/6c167402-489f-40ac-ae7e-c57e0ecace2b-kube-api-access-qwwjf\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.819011 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c167402-489f-40ac-ae7e-c57e0ecace2b-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.956272 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config" (OuterVolumeSpecName: "config") pod 
"29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.956886 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c167402-489f-40ac-ae7e-c57e0ecace2b" (UID: "6c167402-489f-40ac-ae7e-c57e0ecace2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:43 crc kubenswrapper[4792]: I0319 17:09:43.994520 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.000559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.012600 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.013455 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29b28abf-0997-4ee6-a514-eb15f9955657" (UID: "29b28abf-0997-4ee6-a514-eb15f9955657"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.024305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data" (OuterVolumeSpecName: "config-data") pod "6c167402-489f-40ac-ae7e-c57e0ecace2b" (UID: "6c167402-489f-40ac-ae7e-c57e0ecace2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047637 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047674 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047684 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047694 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c167402-489f-40ac-ae7e-c57e0ecace2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc 
kubenswrapper[4792]: I0319 17:09:44.047703 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047710 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.047718 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b28abf-0997-4ee6-a514-eb15f9955657-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.155565 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mfkfn" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.343980 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n" event={"ID":"29b28abf-0997-4ee6-a514-eb15f9955657","Type":"ContainerDied","Data":"b7cccfad2b7b8ba585c9070d306b7b19d4aa8eade543d885da91d17e1a24c029"} Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.344070 4792 scope.go:117] "RemoveContainer" containerID="2ecbeefda1f62f3c9a391810e96fef46370a3e881c9a7d6b06ef9b9f36cc20bc" Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.346060 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-f2c6n"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.346701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c167402-489f-40ac-ae7e-c57e0ecace2b","Type":"ContainerDied","Data":"d17c5729deacbd4b8f8a0fb6a15ca97240fcf2ab72c25648321651ae60a92c04"}
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.346738 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.354061 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcpsw\" (UniqueName: \"kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw\") pod \"41c27def-27bf-4b67-abcf-428ff60a77bd\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") "
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.354100 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts\") pod \"41c27def-27bf-4b67-abcf-428ff60a77bd\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") "
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.354294 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data\") pod \"41c27def-27bf-4b67-abcf-428ff60a77bd\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") "
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.354357 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle\") pod \"41c27def-27bf-4b67-abcf-428ff60a77bd\" (UID: \"41c27def-27bf-4b67-abcf-428ff60a77bd\") "
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.356627 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mfkfn"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.356663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mfkfn" event={"ID":"41c27def-27bf-4b67-abcf-428ff60a77bd","Type":"ContainerDied","Data":"1c0745c80461119c6c3967a1bd8e3e275265b52572dac6ceb62842009b0e25ef"}
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.356692 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0745c80461119c6c3967a1bd8e3e275265b52572dac6ceb62842009b0e25ef"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.359370 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts" (OuterVolumeSpecName: "scripts") pod "41c27def-27bf-4b67-abcf-428ff60a77bd" (UID: "41c27def-27bf-4b67-abcf-428ff60a77bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.366146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw" (OuterVolumeSpecName: "kube-api-access-lcpsw") pod "41c27def-27bf-4b67-abcf-428ff60a77bd" (UID: "41c27def-27bf-4b67-abcf-428ff60a77bd"). InnerVolumeSpecName "kube-api-access-lcpsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.407220 4792 scope.go:117] "RemoveContainer" containerID="227613be2a88466e43d65a676e693897919aa93283aace6724d4afa3f32b16f7"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.442011 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c27def-27bf-4b67-abcf-428ff60a77bd" (UID: "41c27def-27bf-4b67-abcf-428ff60a77bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.448096 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data" (OuterVolumeSpecName: "config-data") pod "41c27def-27bf-4b67-abcf-428ff60a77bd" (UID: "41c27def-27bf-4b67-abcf-428ff60a77bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.457511 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.457544 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcpsw\" (UniqueName: \"kubernetes.io/projected/41c27def-27bf-4b67-abcf-428ff60a77bd-kube-api-access-lcpsw\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.457555 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-scripts\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.457563 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c27def-27bf-4b67-abcf-428ff60a77bd-config-data\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.461908 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.472115 4792 scope.go:117] "RemoveContainer" containerID="bf001c92ebca9d26d4da4f8a16f1874cbf872c9e05570eeea7f3c97726164ae7"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.495471 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-f2c6n"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.533351 4792 scope.go:117] "RemoveContainer" containerID="3ef20cd5a09aefbc5dfa832d5beb5b8ceadbe67d83194b67e1c6a41f3dd3cab7"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.533489 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.541947 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.553511 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.554069 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-metadata"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554086 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-metadata"
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.554100 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-log"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554108 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-log"
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.554148 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c27def-27bf-4b67-abcf-428ff60a77bd" containerName="nova-manage"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554157 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c27def-27bf-4b67-abcf-428ff60a77bd" containerName="nova-manage"
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.554166 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="init"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554173 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="init"
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.554191 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="dnsmasq-dns"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554199 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="dnsmasq-dns"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554559 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-log"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554589 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" containerName="dnsmasq-dns"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554610 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c27def-27bf-4b67-abcf-428ff60a77bd" containerName="nova-manage"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.554640 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" containerName="nova-metadata-metadata"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.556503 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.559129 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.559277 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.563924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.563977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.564120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.564280 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.564350 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8f7\" (UniqueName: \"kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.566396 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.566682 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-log" containerID="cri-o://9bcea6ae418d6c1fbbb41d4855a4ffb725d3c6bb28e5c74513d1e022b29c12c0" gracePeriod=30
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.567140 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-api" containerID="cri-o://ed7eab723d9d3120e3283d205e815a7fa0799cd11eff7684cbacda9f4bf80b31" gracePeriod=30
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.580669 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.593914 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: E0319 17:09:44.594911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-pr8f7 logs nova-metadata-tls-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-metadata-0" podUID="3352a79f-50c0-457b-b7fd-b3cc1e51ff50"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.658411 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.666975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.667062 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8f7\" (UniqueName: \"kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.667130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.667163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.667191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.667604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.672913 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.673111 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.675110 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:44 crc kubenswrapper[4792]: I0319 17:09:44.690493 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8f7\" (UniqueName: \"kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7\") pod \"nova-metadata-0\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.380430 4792 generic.go:334] "Generic (PLEG): container finished" podID="078c8c6b-0852-41c2-8114-6ec521760afc" containerID="ed7eab723d9d3120e3283d205e815a7fa0799cd11eff7684cbacda9f4bf80b31" exitCode=0
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.380866 4792 generic.go:334] "Generic (PLEG): container finished" podID="078c8c6b-0852-41c2-8114-6ec521760afc" containerID="9bcea6ae418d6c1fbbb41d4855a4ffb725d3c6bb28e5c74513d1e022b29c12c0" exitCode=143
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.380498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerDied","Data":"ed7eab723d9d3120e3283d205e815a7fa0799cd11eff7684cbacda9f4bf80b31"}
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.380966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerDied","Data":"9bcea6ae418d6c1fbbb41d4855a4ffb725d3c6bb28e5c74513d1e022b29c12c0"}
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.397268 4792 generic.go:334] "Generic (PLEG): container finished" podID="76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" containerID="2ee98adfec418dd9ab41ad9d9f01da3b42eb2da6ed01318b9a2d5496b22376b6" exitCode=0
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.397636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j4x4v" event={"ID":"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8","Type":"ContainerDied","Data":"2ee98adfec418dd9ab41ad9d9f01da3b42eb2da6ed01318b9a2d5496b22376b6"}
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.401821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" event={"ID":"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a","Type":"ContainerDied","Data":"93201ff122fb160c37abd2ab1cae25945a9f9e171b51e8ce13ba109374c920e7"}
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.401278 4792 generic.go:334] "Generic (PLEG): container finished" podID="2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" containerID="93201ff122fb160c37abd2ab1cae25945a9f9e171b51e8ce13ba109374c920e7" exitCode=0
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.403255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.445349 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.586612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle\") pod \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.586969 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8f7\" (UniqueName: \"kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7\") pod \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.587026 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs\") pod \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.587502 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs" (OuterVolumeSpecName: "logs") pod "3352a79f-50c0-457b-b7fd-b3cc1e51ff50" (UID: "3352a79f-50c0-457b-b7fd-b3cc1e51ff50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.587935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data\") pod \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.588069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs\") pod \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\" (UID: \"3352a79f-50c0-457b-b7fd-b3cc1e51ff50\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.589002 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-logs\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.593750 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3352a79f-50c0-457b-b7fd-b3cc1e51ff50" (UID: "3352a79f-50c0-457b-b7fd-b3cc1e51ff50"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.594355 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7" (OuterVolumeSpecName: "kube-api-access-pr8f7") pod "3352a79f-50c0-457b-b7fd-b3cc1e51ff50" (UID: "3352a79f-50c0-457b-b7fd-b3cc1e51ff50"). InnerVolumeSpecName "kube-api-access-pr8f7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.594351 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data" (OuterVolumeSpecName: "config-data") pod "3352a79f-50c0-457b-b7fd-b3cc1e51ff50" (UID: "3352a79f-50c0-457b-b7fd-b3cc1e51ff50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.596423 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3352a79f-50c0-457b-b7fd-b3cc1e51ff50" (UID: "3352a79f-50c0-457b-b7fd-b3cc1e51ff50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.599699 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.691198 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-config-data\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.691233 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.691248 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.691259 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8f7\" (UniqueName: \"kubernetes.io/projected/3352a79f-50c0-457b-b7fd-b3cc1e51ff50-kube-api-access-pr8f7\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.753890 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b28abf-0997-4ee6-a514-eb15f9955657" path="/var/lib/kubelet/pods/29b28abf-0997-4ee6-a514-eb15f9955657/volumes"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.754559 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c167402-489f-40ac-ae7e-c57e0ecace2b" path="/var/lib/kubelet/pods/6c167402-489f-40ac-ae7e-c57e0ecace2b/volumes"
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.792942 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs\") pod \"078c8c6b-0852-41c2-8114-6ec521760afc\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.793015 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg5t5\" (UniqueName: \"kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5\") pod \"078c8c6b-0852-41c2-8114-6ec521760afc\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.793158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data\") pod \"078c8c6b-0852-41c2-8114-6ec521760afc\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.793409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle\") pod \"078c8c6b-0852-41c2-8114-6ec521760afc\" (UID: \"078c8c6b-0852-41c2-8114-6ec521760afc\") "
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.793484 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs" (OuterVolumeSpecName: "logs") pod "078c8c6b-0852-41c2-8114-6ec521760afc" (UID: "078c8c6b-0852-41c2-8114-6ec521760afc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.794153 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078c8c6b-0852-41c2-8114-6ec521760afc-logs\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.802901 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5" (OuterVolumeSpecName: "kube-api-access-fg5t5") pod "078c8c6b-0852-41c2-8114-6ec521760afc" (UID: "078c8c6b-0852-41c2-8114-6ec521760afc"). InnerVolumeSpecName "kube-api-access-fg5t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.828808 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "078c8c6b-0852-41c2-8114-6ec521760afc" (UID: "078c8c6b-0852-41c2-8114-6ec521760afc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.829606 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data" (OuterVolumeSpecName: "config-data") pod "078c8c6b-0852-41c2-8114-6ec521760afc" (UID: "078c8c6b-0852-41c2-8114-6ec521760afc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.897262 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg5t5\" (UniqueName: \"kubernetes.io/projected/078c8c6b-0852-41c2-8114-6ec521760afc-kube-api-access-fg5t5\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.897303 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-config-data\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:45 crc kubenswrapper[4792]: I0319 17:09:45.897315 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078c8c6b-0852-41c2-8114-6ec521760afc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.415135 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="14267d7e-df24-4020-80af-00ef78ef1105" containerName="nova-scheduler-scheduler" containerID="cri-o://93a911859bad4b4a33929609d4a981aaa45240752ee191460602f0ba2e10885b" gracePeriod=30
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.415648 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.415741 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.415781 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"078c8c6b-0852-41c2-8114-6ec521760afc","Type":"ContainerDied","Data":"53ca95cf4663fd222bcb443b189d2b7678485d06d3313809b29856f8799488a4"}
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.415818 4792 scope.go:117] "RemoveContainer" containerID="ed7eab723d9d3120e3283d205e815a7fa0799cd11eff7684cbacda9f4bf80b31"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.473515 4792 scope.go:117] "RemoveContainer" containerID="9bcea6ae418d6c1fbbb41d4855a4ffb725d3c6bb28e5c74513d1e022b29c12c0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.479316 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.540473 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.551130 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: E0319 17:09:46.551702 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-log"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.551722 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-log"
Mar 19 17:09:46 crc kubenswrapper[4792]: E0319 17:09:46.551734 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-api"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.551741 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-api"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.551991 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-log"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.552015 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" containerName="nova-api-api"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.553257 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.558053 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.565965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.598779 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.616466 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.631971 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.634453 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.636605 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.637072 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.662083 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.717142 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.717333 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.717669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.717700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xmt\" (UniqueName: \"kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.819897 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.819953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.819971 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xmt\" (UniqueName: \"kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0"
Mar 19 17:09:46 crc kubenswrapper[4792]: I0319
17:09:46.820162 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820245 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5b4c\" (UniqueName: \"kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820267 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.820693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.825814 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.825970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.855814 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xmt\" (UniqueName: \"kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt\") pod \"nova-api-0\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.878348 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.922644 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.922714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.922785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5b4c\" (UniqueName: \"kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " 
pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.922967 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.923074 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.923324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.927550 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.928013 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.936367 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.939830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5b4c\" (UniqueName: \"kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c\") pod \"nova-metadata-0\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " pod="openstack/nova-metadata-0" Mar 19 17:09:46 crc kubenswrapper[4792]: I0319 17:09:46.955824 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.187120 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.206512 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.333713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25gxh\" (UniqueName: \"kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh\") pod \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.333795 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn5lc\" (UniqueName: \"kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc\") pod \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.333870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle\") pod \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.333908 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts\") pod \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.333994 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts\") pod \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.334031 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle\") pod \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.334050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data\") pod \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\" (UID: \"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.334132 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data\") pod \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\" (UID: \"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.347084 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts" (OuterVolumeSpecName: "scripts") pod "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" (UID: "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.347176 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc" (OuterVolumeSpecName: "kube-api-access-kn5lc") pod "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" (UID: "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a"). InnerVolumeSpecName "kube-api-access-kn5lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.347713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh" (OuterVolumeSpecName: "kube-api-access-25gxh") pod "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" (UID: "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8"). InnerVolumeSpecName "kube-api-access-25gxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.359372 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.362826 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts" (OuterVolumeSpecName: "scripts") pod "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" (UID: "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.372019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data" (OuterVolumeSpecName: "config-data") pod "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" (UID: "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.389549 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data" (OuterVolumeSpecName: "config-data") pod "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" (UID: "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.392263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" (UID: "2bbd4aa3-ab8f-496f-8c97-d99869f2c91a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.392729 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" (UID: "76c2a4ef-0756-47fd-a30e-1af46f2d5bc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.432303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerStarted","Data":"dc50caaef981856ead06bed429331c6955c3eb1fa02a8368fc6bde4b4f771db9"} Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.448769 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25gxh\" (UniqueName: \"kubernetes.io/projected/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-kube-api-access-25gxh\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449343 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn5lc\" (UniqueName: \"kubernetes.io/projected/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-kube-api-access-kn5lc\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449365 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449378 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449410 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449421 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449431 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.449443 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.450042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j4x4v" event={"ID":"76c2a4ef-0756-47fd-a30e-1af46f2d5bc8","Type":"ContainerDied","Data":"cd5469c2a51debd0b4e093e69615f08a21115512536440b144f6040d3cd5d792"} Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.450079 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5469c2a51debd0b4e093e69615f08a21115512536440b144f6040d3cd5d792" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.450136 4792 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j4x4v" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.453977 4792 generic.go:334] "Generic (PLEG): container finished" podID="14267d7e-df24-4020-80af-00ef78ef1105" containerID="93a911859bad4b4a33929609d4a981aaa45240752ee191460602f0ba2e10885b" exitCode=0 Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.454046 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14267d7e-df24-4020-80af-00ef78ef1105","Type":"ContainerDied","Data":"93a911859bad4b4a33929609d4a981aaa45240752ee191460602f0ba2e10885b"} Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.461354 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.461344 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvqdf" event={"ID":"2bbd4aa3-ab8f-496f-8c97-d99869f2c91a","Type":"ContainerDied","Data":"d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd"} Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.461426 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d398db5386d443fc27925897ac41ebf2192b737f2e78b5ee5f989cc138f8bfdd" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.549771 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.554655 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 17:09:47 crc kubenswrapper[4792]: E0319 17:09:47.555203 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" containerName="aodh-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555231 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" containerName="aodh-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: E0319 17:09:47.555270 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14267d7e-df24-4020-80af-00ef78ef1105" containerName="nova-scheduler-scheduler" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555280 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="14267d7e-df24-4020-80af-00ef78ef1105" containerName="nova-scheduler-scheduler" Mar 19 17:09:47 crc kubenswrapper[4792]: E0319 17:09:47.555308 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" containerName="nova-cell1-conductor-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555316 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" containerName="nova-cell1-conductor-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555598 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" containerName="aodh-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555616 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" containerName="nova-cell1-conductor-db-sync" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.555642 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="14267d7e-df24-4020-80af-00ef78ef1105" containerName="nova-scheduler-scheduler" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.557271 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.561432 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.584061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.644383 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.653812 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle\") pod \"14267d7e-df24-4020-80af-00ef78ef1105\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.653965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rs4f\" (UniqueName: \"kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f\") pod \"14267d7e-df24-4020-80af-00ef78ef1105\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.654013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data\") pod \"14267d7e-df24-4020-80af-00ef78ef1105\" (UID: \"14267d7e-df24-4020-80af-00ef78ef1105\") " Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.654233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6cdl7\" (UniqueName: \"kubernetes.io/projected/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-kube-api-access-6cdl7\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.654303 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.654456 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.658724 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f" (OuterVolumeSpecName: "kube-api-access-9rs4f") pod "14267d7e-df24-4020-80af-00ef78ef1105" (UID: "14267d7e-df24-4020-80af-00ef78ef1105"). InnerVolumeSpecName "kube-api-access-9rs4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.691656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14267d7e-df24-4020-80af-00ef78ef1105" (UID: "14267d7e-df24-4020-80af-00ef78ef1105"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.700141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data" (OuterVolumeSpecName: "config-data") pod "14267d7e-df24-4020-80af-00ef78ef1105" (UID: "14267d7e-df24-4020-80af-00ef78ef1105"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cdl7\" (UniqueName: \"kubernetes.io/projected/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-kube-api-access-6cdl7\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756496 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756520 4792 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9rs4f\" (UniqueName: \"kubernetes.io/projected/14267d7e-df24-4020-80af-00ef78ef1105-kube-api-access-9rs4f\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.756535 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14267d7e-df24-4020-80af-00ef78ef1105-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.757274 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078c8c6b-0852-41c2-8114-6ec521760afc" path="/var/lib/kubelet/pods/078c8c6b-0852-41c2-8114-6ec521760afc/volumes" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.758380 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3352a79f-50c0-457b-b7fd-b3cc1e51ff50" path="/var/lib/kubelet/pods/3352a79f-50c0-457b-b7fd-b3cc1e51ff50/volumes" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.759693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.760390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.781470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cdl7\" (UniqueName: \"kubernetes.io/projected/1934f0b5-96a5-41e7-8b10-f06f65ec46e1-kube-api-access-6cdl7\") pod \"nova-cell1-conductor-0\" (UID: 
\"1934f0b5-96a5-41e7-8b10-f06f65ec46e1\") " pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:47 crc kubenswrapper[4792]: I0319 17:09:47.883784 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.398987 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.503610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerStarted","Data":"98c6d049a8c997ad0114db655559f053f2cdb78e59de8e1a77127e87b29df356"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.503651 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerStarted","Data":"bde8d4cb05f66c4e8fe49ae1e4c2c0325aa27edc0acddb72d8cbfedff8947fa5"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.503660 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerStarted","Data":"2bd54773754edd15ad62601fad0ac3b5e5a00e2214794959d774d85bf9a74860"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.508614 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14267d7e-df24-4020-80af-00ef78ef1105","Type":"ContainerDied","Data":"9d57d041abcdf0617c2cd6e271d7ae3d0360f7330886ed771740e34199c9fb38"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.508670 4792 scope.go:117] "RemoveContainer" containerID="93a911859bad4b4a33929609d4a981aaa45240752ee191460602f0ba2e10885b" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.508813 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.529227 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerStarted","Data":"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.529444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerStarted","Data":"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.537794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1934f0b5-96a5-41e7-8b10-f06f65ec46e1","Type":"ContainerStarted","Data":"ec040e934ea2c6bd7a520e1c9c7da38bfa9881a2ccb9de97f365cc9b6c20e97f"} Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.541981 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.541959726 podStartE2EDuration="2.541959726s" podCreationTimestamp="2026-03-19 17:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:48.523769067 +0000 UTC m=+1751.669826607" watchObservedRunningTime="2026-03-19 17:09:48.541959726 +0000 UTC m=+1751.688017266" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.554831 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.554809108 podStartE2EDuration="2.554809108s" podCreationTimestamp="2026-03-19 17:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:48.552112174 +0000 UTC m=+1751.698169714" 
watchObservedRunningTime="2026-03-19 17:09:48.554809108 +0000 UTC m=+1751.700866648" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.621868 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.639930 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.655562 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.657283 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.660042 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.671999 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.792612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b29rb\" (UniqueName: \"kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.793366 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.794008 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.895753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.895957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.896114 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b29rb\" (UniqueName: \"kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.900296 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.907953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " 
pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.913351 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b29rb\" (UniqueName: \"kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb\") pod \"nova-scheduler-0\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " pod="openstack/nova-scheduler-0" Mar 19 17:09:48 crc kubenswrapper[4792]: I0319 17:09:48.986076 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:09:49 crc kubenswrapper[4792]: W0319 17:09:49.504948 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd5da2b_f2ec_4313_a738_63373d968a78.slice/crio-b659793e0c6c126951a95a4877365d87ea079712c2068693b3d9f693d492f940 WatchSource:0}: Error finding container b659793e0c6c126951a95a4877365d87ea079712c2068693b3d9f693d492f940: Status 404 returned error can't find the container with id b659793e0c6c126951a95a4877365d87ea079712c2068693b3d9f693d492f940 Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.505145 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.564253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1934f0b5-96a5-41e7-8b10-f06f65ec46e1","Type":"ContainerStarted","Data":"1d596b7357a300e7094c4c810ad5a44cc1c46e114054a5b2eb54774f8be04ca2"} Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.564366 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.569422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"0bd5da2b-f2ec-4313-a738-63373d968a78","Type":"ContainerStarted","Data":"b659793e0c6c126951a95a4877365d87ea079712c2068693b3d9f693d492f940"} Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.598307 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.598287575 podStartE2EDuration="2.598287575s" podCreationTimestamp="2026-03-19 17:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:49.588295611 +0000 UTC m=+1752.734353161" watchObservedRunningTime="2026-03-19 17:09:49.598287575 +0000 UTC m=+1752.744345115" Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.705292 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.705345 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:49 crc kubenswrapper[4792]: I0319 17:09:49.754643 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14267d7e-df24-4020-80af-00ef78ef1105" path="/var/lib/kubelet/pods/14267d7e-df24-4020-80af-00ef78ef1105/volumes" Mar 19 17:09:50 crc kubenswrapper[4792]: I0319 17:09:50.605915 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd5da2b-f2ec-4313-a738-63373d968a78","Type":"ContainerStarted","Data":"3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2"} Mar 19 17:09:50 crc kubenswrapper[4792]: I0319 17:09:50.637601 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.637582005 podStartE2EDuration="2.637582005s" podCreationTimestamp="2026-03-19 17:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:09:50.62247222 +0000 UTC m=+1753.768529760" watchObservedRunningTime="2026-03-19 17:09:50.637582005 +0000 UTC m=+1753.783639545" Mar 19 17:09:50 crc kubenswrapper[4792]: I0319 17:09:50.740561 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:09:50 crc kubenswrapper[4792]: E0319 17:09:50.741088 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:09:50 crc kubenswrapper[4792]: I0319 17:09:50.754869 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9xm4s" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="registry-server" probeResult="failure" output=< Mar 19 17:09:50 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:09:50 crc kubenswrapper[4792]: > Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.131433 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.135601 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.142229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gdn5p" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.142809 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.164395 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.173066 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.278926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5th\" (UniqueName: \"kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.279170 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.279226 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.279325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.381470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5th\" (UniqueName: \"kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.381903 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.381951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.382007 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.388222 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.388306 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.388608 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.412501 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5th\" (UniqueName: \"kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th\") pod \"aodh-0\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " pod="openstack/aodh-0" Mar 19 17:09:52 crc kubenswrapper[4792]: I0319 17:09:52.486263 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:09:53 crc kubenswrapper[4792]: I0319 17:09:53.115145 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:09:53 crc kubenswrapper[4792]: W0319 17:09:53.123230 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6af4dd_ea78_485e_bb95_dc92993ed452.slice/crio-d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33 WatchSource:0}: Error finding container d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33: Status 404 returned error can't find the container with id d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33 Mar 19 17:09:53 crc kubenswrapper[4792]: I0319 17:09:53.658825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerStarted","Data":"d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33"} Mar 19 17:09:53 crc kubenswrapper[4792]: I0319 17:09:53.986462 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.672378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerStarted","Data":"91db90c2c9f57962d32afa945048c1466b78413b1c70ca48224a67dca79149c7"} Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.999004 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.999354 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-central-agent" containerID="cri-o://e5a0db264e2776ed2eed7d5de042d6eb1dea421af2015f0b3882c4085dba0ee9" gracePeriod=30 Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.999459 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="proxy-httpd" containerID="cri-o://b2de50df3da091c3a148a34a4e1f91826218578c79c663ce87ad1c197e1108f4" gracePeriod=30 Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.999555 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="sg-core" containerID="cri-o://9eb1d28195f4f70348faf9e08b2a603dd12cec9c9264a91157ccf16a78a7cc07" gracePeriod=30 Mar 19 17:09:54 crc kubenswrapper[4792]: I0319 17:09:54.999602 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-notification-agent" containerID="cri-o://33d5ea7d532e3b437af7b46fbd783a65b7b60696da07aa34b6c6afb57da05f59" gracePeriod=30 Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.010802 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.563164 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688608 4792 generic.go:334] "Generic (PLEG): container finished" podID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerID="b2de50df3da091c3a148a34a4e1f91826218578c79c663ce87ad1c197e1108f4" exitCode=0 Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688646 4792 generic.go:334] "Generic (PLEG): container finished" podID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerID="9eb1d28195f4f70348faf9e08b2a603dd12cec9c9264a91157ccf16a78a7cc07" exitCode=2 Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688658 4792 generic.go:334] "Generic (PLEG): container finished" podID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerID="33d5ea7d532e3b437af7b46fbd783a65b7b60696da07aa34b6c6afb57da05f59" exitCode=0 Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688668 4792 generic.go:334] "Generic (PLEG): container finished" podID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerID="e5a0db264e2776ed2eed7d5de042d6eb1dea421af2015f0b3882c4085dba0ee9" exitCode=0 Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688700 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerDied","Data":"b2de50df3da091c3a148a34a4e1f91826218578c79c663ce87ad1c197e1108f4"} Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688743 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerDied","Data":"9eb1d28195f4f70348faf9e08b2a603dd12cec9c9264a91157ccf16a78a7cc07"} Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerDied","Data":"33d5ea7d532e3b437af7b46fbd783a65b7b60696da07aa34b6c6afb57da05f59"} Mar 19 17:09:55 crc kubenswrapper[4792]: I0319 17:09:55.688762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerDied","Data":"e5a0db264e2776ed2eed7d5de042d6eb1dea421af2015f0b3882c4085dba0ee9"} Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.238964 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.382646 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxlck\" (UniqueName: \"kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383041 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc 
kubenswrapper[4792]: I0319 17:09:56.383151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383275 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts\") pod \"adc76bf7-5198-40e1-8a3b-0be22a391686\" (UID: \"adc76bf7-5198-40e1-8a3b-0be22a391686\") " Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.383805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.384690 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.385403 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.391255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts" (OuterVolumeSpecName: "scripts") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.414390 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck" (OuterVolumeSpecName: "kube-api-access-xxlck") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "kube-api-access-xxlck". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.428309 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.482262 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.487749 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxlck\" (UniqueName: \"kubernetes.io/projected/adc76bf7-5198-40e1-8a3b-0be22a391686-kube-api-access-xxlck\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.487787 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adc76bf7-5198-40e1-8a3b-0be22a391686-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.487800 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.487820 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.487832 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.556050 4792 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data" (OuterVolumeSpecName: "config-data") pod "adc76bf7-5198-40e1-8a3b-0be22a391686" (UID: "adc76bf7-5198-40e1-8a3b-0be22a391686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.590174 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc76bf7-5198-40e1-8a3b-0be22a391686-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.704171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adc76bf7-5198-40e1-8a3b-0be22a391686","Type":"ContainerDied","Data":"1f92649bed9f69bd9474c1b964629b8464c556db38a5a7c556bc0980666e84e9"} Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.704255 4792 scope.go:117] "RemoveContainer" containerID="b2de50df3da091c3a148a34a4e1f91826218578c79c663ce87ad1c197e1108f4" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.704196 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.708266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerStarted","Data":"50326bf6fa3a07a36f840845b93e6822fa6056888b7f7d0b3d24ded829d1a2a2"} Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.777631 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.790368 4792 scope.go:117] "RemoveContainer" containerID="9eb1d28195f4f70348faf9e08b2a603dd12cec9c9264a91157ccf16a78a7cc07" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.805182 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.822910 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:56 crc kubenswrapper[4792]: E0319 17:09:56.823564 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-notification-agent" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.823581 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-notification-agent" Mar 19 17:09:56 crc kubenswrapper[4792]: E0319 17:09:56.823619 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="sg-core" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.823629 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="sg-core" Mar 19 17:09:56 crc kubenswrapper[4792]: E0319 17:09:56.823660 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-central-agent" Mar 19 
17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.823668 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-central-agent" Mar 19 17:09:56 crc kubenswrapper[4792]: E0319 17:09:56.823685 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="proxy-httpd" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.823694 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="proxy-httpd" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.824002 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="sg-core" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.824066 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-central-agent" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.824089 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="ceilometer-notification-agent" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.824110 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" containerName="proxy-httpd" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.826872 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.852119 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.852214 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.853795 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.882991 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.883023 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.900497 4792 scope.go:117] "RemoveContainer" containerID="33d5ea7d532e3b437af7b46fbd783a65b7b60696da07aa34b6c6afb57da05f59" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.944103 4792 scope.go:117] "RemoveContainer" containerID="e5a0db264e2776ed2eed7d5de042d6eb1dea421af2015f0b3882c4085dba0ee9" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.957188 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 17:09:56 crc kubenswrapper[4792]: I0319 17:09:56.957610 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007584 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007765 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007805 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrqr\" (UniqueName: \"kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.007911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.109776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.109832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.109871 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrqr\" (UniqueName: \"kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.109911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.109935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc 
kubenswrapper[4792]: I0319 17:09:57.110027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.110120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.110735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.111027 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.114294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.125349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " 
pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.125632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.127256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrqr\" (UniqueName: \"kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.128144 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.182002 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.697505 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.756896 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc76bf7-5198-40e1-8a3b-0be22a391686" path="/var/lib/kubelet/pods/adc76bf7-5198-40e1-8a3b-0be22a391686/volumes" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.943448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.966132 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.980054 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.980071 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:09:57 crc kubenswrapper[4792]: I0319 17:09:57.980880 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:09:58 crc kubenswrapper[4792]: W0319 17:09:58.102087 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e522825_72e3_4ad3_a577_c565f61afb35.slice/crio-a94b3323fbf7996b58d07e4e2a13e59438a2697b3456166ca93eaf95f3626686 WatchSource:0}: Error finding container a94b3323fbf7996b58d07e4e2a13e59438a2697b3456166ca93eaf95f3626686: Status 404 returned error can't find the container with id a94b3323fbf7996b58d07e4e2a13e59438a2697b3456166ca93eaf95f3626686 Mar 19 17:09:58 crc kubenswrapper[4792]: I0319 17:09:58.185265 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:09:58 crc kubenswrapper[4792]: I0319 17:09:58.743201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerStarted","Data":"a5819ecc588c431f8a724682fd8f13a1eed5dc7d0490b65174272becc7b549ee"} Mar 19 17:09:58 crc kubenswrapper[4792]: I0319 17:09:58.749563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerStarted","Data":"a94b3323fbf7996b58d07e4e2a13e59438a2697b3456166ca93eaf95f3626686"} Mar 19 17:09:58 crc kubenswrapper[4792]: I0319 17:09:58.986894 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 17:09:59 crc kubenswrapper[4792]: I0319 17:09:59.035136 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 17:09:59 crc kubenswrapper[4792]: I0319 17:09:59.772213 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerStarted","Data":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} Mar 19 17:09:59 crc kubenswrapper[4792]: I0319 17:09:59.775516 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:09:59 crc kubenswrapper[4792]: I0319 17:09:59.805076 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 17:09:59 crc kubenswrapper[4792]: I0319 17:09:59.861480 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.160225 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565670-wjmq6"] Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.166434 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.168973 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.171618 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.175354 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.180327 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565670-wjmq6"] Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.292241 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2fk\" (UniqueName: 
\"kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk\") pod \"auto-csr-approver-29565670-wjmq6\" (UID: \"28af1b81-6dbb-4b6c-ab1d-774bed9bc419\") " pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.394508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl2fk\" (UniqueName: \"kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk\") pod \"auto-csr-approver-29565670-wjmq6\" (UID: \"28af1b81-6dbb-4b6c-ab1d-774bed9bc419\") " pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.414975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl2fk\" (UniqueName: \"kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk\") pod \"auto-csr-approver-29565670-wjmq6\" (UID: \"28af1b81-6dbb-4b6c-ab1d-774bed9bc419\") " pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.510899 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:00 crc kubenswrapper[4792]: I0319 17:10:00.607500 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.825371 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565670-wjmq6"] Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.856060 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" event={"ID":"28af1b81-6dbb-4b6c-ab1d-774bed9bc419","Type":"ContainerStarted","Data":"4c4954b09268e4106162aad7045f5273f9d323c5958263bb8899fdc8c2abfadf"} Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.858380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerStarted","Data":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.862481 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xm4s" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="registry-server" containerID="cri-o://1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a" gracePeriod=2 Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.862744 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-api" containerID="cri-o://91db90c2c9f57962d32afa945048c1466b78413b1c70ca48224a67dca79149c7" gracePeriod=30 Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.862887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerStarted","Data":"03768a66eac5cea8438b6aa509ee6a5c2533893528703cb3a40c6839aa3ff249"} Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.863037 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-notifier" containerID="cri-o://a5819ecc588c431f8a724682fd8f13a1eed5dc7d0490b65174272becc7b549ee" gracePeriod=30 Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.863153 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-listener" containerID="cri-o://03768a66eac5cea8438b6aa509ee6a5c2533893528703cb3a40c6839aa3ff249" gracePeriod=30 Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.863231 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-evaluator" containerID="cri-o://50326bf6fa3a07a36f840845b93e6822fa6056888b7f7d0b3d24ded829d1a2a2" gracePeriod=30 Mar 19 17:10:01 crc kubenswrapper[4792]: I0319 17:10:01.904833 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.8112279660000001 podStartE2EDuration="9.904807858s" podCreationTimestamp="2026-03-19 17:09:52 +0000 UTC" firstStartedPulling="2026-03-19 17:09:53.128650028 +0000 UTC m=+1756.274707568" lastFinishedPulling="2026-03-19 17:10:01.22222992 +0000 UTC m=+1764.368287460" observedRunningTime="2026-03-19 17:10:01.900742126 +0000 UTC m=+1765.046799666" watchObservedRunningTime="2026-03-19 17:10:01.904807858 +0000 UTC m=+1765.050865408" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.414785 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.453190 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mqnt\" (UniqueName: \"kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt\") pod \"eccea905-8d78-4f68-865f-af56721dbe2d\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.453382 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities\") pod \"eccea905-8d78-4f68-865f-af56721dbe2d\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.453606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content\") pod \"eccea905-8d78-4f68-865f-af56721dbe2d\" (UID: \"eccea905-8d78-4f68-865f-af56721dbe2d\") " Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.455712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities" (OuterVolumeSpecName: "utilities") pod "eccea905-8d78-4f68-865f-af56721dbe2d" (UID: "eccea905-8d78-4f68-865f-af56721dbe2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.474347 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt" (OuterVolumeSpecName: "kube-api-access-9mqnt") pod "eccea905-8d78-4f68-865f-af56721dbe2d" (UID: "eccea905-8d78-4f68-865f-af56721dbe2d"). InnerVolumeSpecName "kube-api-access-9mqnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.556620 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mqnt\" (UniqueName: \"kubernetes.io/projected/eccea905-8d78-4f68-865f-af56721dbe2d-kube-api-access-9mqnt\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.557938 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.562906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eccea905-8d78-4f68-865f-af56721dbe2d" (UID: "eccea905-8d78-4f68-865f-af56721dbe2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.659686 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccea905-8d78-4f68-865f-af56721dbe2d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.880000 4792 generic.go:334] "Generic (PLEG): container finished" podID="eccea905-8d78-4f68-865f-af56721dbe2d" containerID="1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a" exitCode=0 Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.880368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerDied","Data":"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a"} Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.880401 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9xm4s" event={"ID":"eccea905-8d78-4f68-865f-af56721dbe2d","Type":"ContainerDied","Data":"052a0f6ba21a217b4821ef2287a375cf69d8ad570663b5bbe2ff40c73ca4527e"} Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.880421 4792 scope.go:117] "RemoveContainer" containerID="1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.880591 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xm4s" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.892215 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerStarted","Data":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.896660 4792 generic.go:334] "Generic (PLEG): container finished" podID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerID="50326bf6fa3a07a36f840845b93e6822fa6056888b7f7d0b3d24ded829d1a2a2" exitCode=0 Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.896690 4792 generic.go:334] "Generic (PLEG): container finished" podID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerID="91db90c2c9f57962d32afa945048c1466b78413b1c70ca48224a67dca79149c7" exitCode=0 Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.896712 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerDied","Data":"50326bf6fa3a07a36f840845b93e6822fa6056888b7f7d0b3d24ded829d1a2a2"} Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.896736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerDied","Data":"91db90c2c9f57962d32afa945048c1466b78413b1c70ca48224a67dca79149c7"} Mar 19 17:10:02 
crc kubenswrapper[4792]: I0319 17:10:02.922230 4792 scope.go:117] "RemoveContainer" containerID="2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25" Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.924599 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.940037 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xm4s"] Mar 19 17:10:02 crc kubenswrapper[4792]: I0319 17:10:02.975773 4792 scope.go:117] "RemoveContainer" containerID="bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.018695 4792 scope.go:117] "RemoveContainer" containerID="1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a" Mar 19 17:10:03 crc kubenswrapper[4792]: E0319 17:10:03.019086 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a\": container with ID starting with 1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a not found: ID does not exist" containerID="1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.019124 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a"} err="failed to get container status \"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a\": rpc error: code = NotFound desc = could not find container \"1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a\": container with ID starting with 1000af29e26f241912526cc370933a35d503cb69ab01adafb54e4377ab034d2a not found: ID does not exist" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.019146 4792 
scope.go:117] "RemoveContainer" containerID="2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25" Mar 19 17:10:03 crc kubenswrapper[4792]: E0319 17:10:03.019541 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25\": container with ID starting with 2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25 not found: ID does not exist" containerID="2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.019582 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25"} err="failed to get container status \"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25\": rpc error: code = NotFound desc = could not find container \"2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25\": container with ID starting with 2653004a5be45e3236d9a6c8480856582ee7166fe9954251c2739a1a87731c25 not found: ID does not exist" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.019608 4792 scope.go:117] "RemoveContainer" containerID="bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b" Mar 19 17:10:03 crc kubenswrapper[4792]: E0319 17:10:03.020262 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b\": container with ID starting with bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b not found: ID does not exist" containerID="bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.020314 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b"} err="failed to get container status \"bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b\": rpc error: code = NotFound desc = could not find container \"bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b\": container with ID starting with bffbf91498feb37ca5b9d6b5581f881c2d0758ea68e775f0d11ed371fa9da92b not found: ID does not exist" Mar 19 17:10:03 crc kubenswrapper[4792]: I0319 17:10:03.778689 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" path="/var/lib/kubelet/pods/eccea905-8d78-4f68-865f-af56721dbe2d/volumes" Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.878609 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.878991 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.929245 4792 generic.go:334] "Generic (PLEG): container finished" podID="28af1b81-6dbb-4b6c-ab1d-774bed9bc419" containerID="60d45b0fc68c6ad678ddc6f7e4ac6ef7051520603e10954c066f5155e487aa19" exitCode=0 Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.929301 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" event={"ID":"28af1b81-6dbb-4b6c-ab1d-774bed9bc419","Type":"ContainerDied","Data":"60d45b0fc68c6ad678ddc6f7e4ac6ef7051520603e10954c066f5155e487aa19"} Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.956857 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 17:10:04 crc kubenswrapper[4792]: I0319 17:10:04.957944 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.741161 
4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:10:05 crc kubenswrapper[4792]: E0319 17:10:05.741772 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.945437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerStarted","Data":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.945776 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-central-agent" containerID="cri-o://11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" gracePeriod=30 Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.945862 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.945925 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="proxy-httpd" containerID="cri-o://7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" gracePeriod=30 Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.946021 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" 
containerName="ceilometer-notification-agent" containerID="cri-o://91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" gracePeriod=30 Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.946074 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="sg-core" containerID="cri-o://f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" gracePeriod=30 Mar 19 17:10:05 crc kubenswrapper[4792]: I0319 17:10:05.993579 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.387817282 podStartE2EDuration="9.99356235s" podCreationTimestamp="2026-03-19 17:09:56 +0000 UTC" firstStartedPulling="2026-03-19 17:09:58.107966478 +0000 UTC m=+1761.254024018" lastFinishedPulling="2026-03-19 17:10:04.713711546 +0000 UTC m=+1767.859769086" observedRunningTime="2026-03-19 17:10:05.980772299 +0000 UTC m=+1769.126829839" watchObservedRunningTime="2026-03-19 17:10:05.99356235 +0000 UTC m=+1769.139619890" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.519571 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.562935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl2fk\" (UniqueName: \"kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk\") pod \"28af1b81-6dbb-4b6c-ab1d-774bed9bc419\" (UID: \"28af1b81-6dbb-4b6c-ab1d-774bed9bc419\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.578789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk" (OuterVolumeSpecName: "kube-api-access-wl2fk") pod "28af1b81-6dbb-4b6c-ab1d-774bed9bc419" (UID: "28af1b81-6dbb-4b6c-ab1d-774bed9bc419"). InnerVolumeSpecName "kube-api-access-wl2fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.666362 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl2fk\" (UniqueName: \"kubernetes.io/projected/28af1b81-6dbb-4b6c-ab1d-774bed9bc419-kube-api-access-wl2fk\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.746370 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.869993 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870262 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870305 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870338 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870418 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrqr\" (UniqueName: \"kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr\") pod \"6e522825-72e3-4ad3-a577-c565f61afb35\" (UID: \"6e522825-72e3-4ad3-a577-c565f61afb35\") " Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870511 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.870875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.872367 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.872411 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e522825-72e3-4ad3-a577-c565f61afb35-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.875087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts" (OuterVolumeSpecName: "scripts") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.875098 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr" (OuterVolumeSpecName: "kube-api-access-mnrqr") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "kube-api-access-mnrqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.882933 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.884148 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.895220 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.907646 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.961666 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.964384 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.971333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" event={"ID":"28af1b81-6dbb-4b6c-ab1d-774bed9bc419","Type":"ContainerDied","Data":"4c4954b09268e4106162aad7045f5273f9d323c5958263bb8899fdc8c2abfadf"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.971366 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4954b09268e4106162aad7045f5273f9d323c5958263bb8899fdc8c2abfadf" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.971472 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565670-wjmq6" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.974534 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.974554 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.974564 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrqr\" (UniqueName: \"kubernetes.io/projected/6e522825-72e3-4ad3-a577-c565f61afb35-kube-api-access-mnrqr\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.975780 4792 generic.go:334] "Generic (PLEG): container finished" podID="6e522825-72e3-4ad3-a577-c565f61afb35" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" exitCode=0 Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.975801 4792 generic.go:334] "Generic (PLEG): container finished" podID="6e522825-72e3-4ad3-a577-c565f61afb35" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" exitCode=2 Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.975810 4792 generic.go:334] "Generic (PLEG): container finished" podID="6e522825-72e3-4ad3-a577-c565f61afb35" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" exitCode=0 Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.975818 4792 generic.go:334] "Generic (PLEG): container finished" podID="6e522825-72e3-4ad3-a577-c565f61afb35" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" exitCode=0 Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976058 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerDied","Data":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerDied","Data":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerDied","Data":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerDied","Data":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e522825-72e3-4ad3-a577-c565f61afb35","Type":"ContainerDied","Data":"a94b3323fbf7996b58d07e4e2a13e59438a2697b3456166ca93eaf95f3626686"} Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976178 4792 scope.go:117] "RemoveContainer" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.976370 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.985996 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 17:10:06 crc kubenswrapper[4792]: I0319 17:10:06.988634 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.000151 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.016177 4792 scope.go:117] "RemoveContainer" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.036871 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.044556 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data" (OuterVolumeSpecName: "config-data") pod "6e522825-72e3-4ad3-a577-c565f61afb35" (UID: "6e522825-72e3-4ad3-a577-c565f61afb35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.059071 4792 scope.go:117] "RemoveContainer" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.076614 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.076659 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e522825-72e3-4ad3-a577-c565f61afb35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.194104 4792 scope.go:117] "RemoveContainer" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.287632 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289250 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="extract-content" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289289 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="extract-content" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289299 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-central-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289313 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-central-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289331 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="sg-core" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289337 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="sg-core" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289352 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-notification-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289358 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-notification-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289371 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="registry-server" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289378 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="registry-server" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289390 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="proxy-httpd" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289396 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="proxy-httpd" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289411 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28af1b81-6dbb-4b6c-ab1d-774bed9bc419" containerName="oc" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289418 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="28af1b81-6dbb-4b6c-ab1d-774bed9bc419" containerName="oc" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.289429 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="extract-utilities" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289436 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="extract-utilities" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289782 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="sg-core" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289797 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="proxy-httpd" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289812 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccea905-8d78-4f68-865f-af56721dbe2d" containerName="registry-server" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289828 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-central-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289857 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" containerName="ceilometer-notification-agent" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.289871 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="28af1b81-6dbb-4b6c-ab1d-774bed9bc419" containerName="oc" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.291178 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.306608 4792 scope.go:117] "RemoveContainer" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.306945 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": container with ID starting with 7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4 not found: ID does not exist" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.306969 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} err="failed to get container status \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": rpc error: code = NotFound desc = could not find container \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": container with ID starting with 7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.306987 4792 scope.go:117] "RemoveContainer" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.307339 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": container with ID starting with f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e not found: ID does not exist" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 
17:10:07.307356 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} err="failed to get container status \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": rpc error: code = NotFound desc = could not find container \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": container with ID starting with f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.307375 4792 scope.go:117] "RemoveContainer" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: E0319 17:10:07.307621 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": container with ID starting with 91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a not found: ID does not exist" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.307639 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} err="failed to get container status \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": rpc error: code = NotFound desc = could not find container \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": container with ID starting with 91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.307651 4792 scope.go:117] "RemoveContainer" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc 
kubenswrapper[4792]: E0319 17:10:07.307914 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": container with ID starting with 11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86 not found: ID does not exist" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.307930 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} err="failed to get container status \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": rpc error: code = NotFound desc = could not find container \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": container with ID starting with 11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.307942 4792 scope.go:117] "RemoveContainer" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308156 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} err="failed to get container status \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": rpc error: code = NotFound desc = could not find container \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": container with ID starting with 7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308179 4792 scope.go:117] "RemoveContainer" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 
17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308629 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} err="failed to get container status \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": rpc error: code = NotFound desc = could not find container \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": container with ID starting with f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308661 4792 scope.go:117] "RemoveContainer" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308861 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} err="failed to get container status \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": rpc error: code = NotFound desc = could not find container \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": container with ID starting with 91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.308875 4792 scope.go:117] "RemoveContainer" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.309057 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} err="failed to get container status \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": rpc error: code = NotFound desc = could not find container 
\"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": container with ID starting with 11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.309070 4792 scope.go:117] "RemoveContainer" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.313411 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} err="failed to get container status \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": rpc error: code = NotFound desc = could not find container \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": container with ID starting with 7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.313440 4792 scope.go:117] "RemoveContainer" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.315639 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} err="failed to get container status \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": rpc error: code = NotFound desc = could not find container \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": container with ID starting with f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.315663 4792 scope.go:117] "RemoveContainer" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.318107 4792 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.320904 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} err="failed to get container status \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": rpc error: code = NotFound desc = could not find container \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": container with ID starting with 91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.320944 4792 scope.go:117] "RemoveContainer" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.321286 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} err="failed to get container status \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": rpc error: code = NotFound desc = could not find container \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": container with ID starting with 11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.321313 4792 scope.go:117] "RemoveContainer" containerID="7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.321713 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4"} err="failed to get container status \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": rpc error: code = 
NotFound desc = could not find container \"7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4\": container with ID starting with 7608b715a1dc17f373b507a00e3fc0e174a5de052324260905b0cc9c8a41aae4 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.321763 4792 scope.go:117] "RemoveContainer" containerID="f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.322059 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e"} err="failed to get container status \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": rpc error: code = NotFound desc = could not find container \"f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e\": container with ID starting with f4e71eef33f32890666e8efd72ce8791c126e82ff73c26317773dbd3924fe78e not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.322086 4792 scope.go:117] "RemoveContainer" containerID="91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.322311 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a"} err="failed to get container status \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": rpc error: code = NotFound desc = could not find container \"91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a\": container with ID starting with 91fa4454ed46a1874d2f60549c135280b6518765cb09fa3f99b3116055103f3a not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.322357 4792 scope.go:117] "RemoveContainer" containerID="11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86" Mar 19 17:10:07 crc 
kubenswrapper[4792]: I0319 17:10:07.322583 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86"} err="failed to get container status \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": rpc error: code = NotFound desc = could not find container \"11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86\": container with ID starting with 11918c62955c32e964f382dff2ee09c7446c67afc7c7e4a97ad4bc17f984ce86 not found: ID does not exist" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387623 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387871 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387918 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387941 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.387974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgt4f\" (UniqueName: \"kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.442963 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.462982 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.476974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490223 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490281 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgt4f\" (UniqueName: \"kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490402 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490464 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.490921 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:07 crc 
kubenswrapper[4792]: I0319 17:10:07.491046 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.491443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.495597 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.495794 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.497604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.498122 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.498215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 
crc kubenswrapper[4792]: I0319 17:10:07.504227 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.510055 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgt4f\" (UniqueName: \"kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f\") pod \"dnsmasq-dns-6d99f6bc7f-pwm7z\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.591730 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592751 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592806 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592870 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfqlm\" (UniqueName: \"kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592909 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.592937 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.626332 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565664-j9rd2"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.636221 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.637730 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565664-j9rd2"] Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699097 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699527 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699608 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfqlm\" (UniqueName: \"kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " 
pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.699904 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.703461 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.703967 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.704223 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.704346 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.704686 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.710285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.729926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfqlm\" (UniqueName: \"kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm\") pod \"ceilometer-0\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " pod="openstack/ceilometer-0" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.769742 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e522825-72e3-4ad3-a577-c565f61afb35" path="/var/lib/kubelet/pods/6e522825-72e3-4ad3-a577-c565f61afb35/volumes" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.775041 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe46a20a-3d00-4840-b3f0-08a10149eefd" path="/var/lib/kubelet/pods/fe46a20a-3d00-4840-b3f0-08a10149eefd/volumes" Mar 19 17:10:07 crc kubenswrapper[4792]: I0319 17:10:07.887078 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:08 crc kubenswrapper[4792]: I0319 17:10:08.324565 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:10:08 crc kubenswrapper[4792]: I0319 17:10:08.567492 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:08 crc kubenswrapper[4792]: W0319 17:10:08.573092 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ffe8952_c448_4573_b21e_27a0db808dd5.slice/crio-b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2 WatchSource:0}: Error finding container b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2: Status 404 returned error can't find the container with id b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2 Mar 19 17:10:09 crc kubenswrapper[4792]: I0319 17:10:09.030406 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerID="500ffbd3ada82aae55aef4e73f7b140332734416488bdd4fd49bb0859faa14fa" exitCode=0 Mar 19 17:10:09 crc kubenswrapper[4792]: I0319 17:10:09.032195 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" event={"ID":"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c","Type":"ContainerDied","Data":"500ffbd3ada82aae55aef4e73f7b140332734416488bdd4fd49bb0859faa14fa"} Mar 19 17:10:09 crc kubenswrapper[4792]: I0319 17:10:09.032226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" event={"ID":"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c","Type":"ContainerStarted","Data":"d3e62a0c8412fc09bfecbe98d052adb0e659a85df1d9c83769e2823604e31cf1"} Mar 19 17:10:09 crc kubenswrapper[4792]: I0319 17:10:09.039534 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerStarted","Data":"b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2"} Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.050892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerStarted","Data":"4bc5a05413e5e2b531a68186d33151c9ba55e703e341a758957000414e23a12e"} Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.054727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" event={"ID":"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c","Type":"ContainerStarted","Data":"d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e"} Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.055091 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.097324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.098356 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-api" containerID="cri-o://355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b" gracePeriod=30 Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.101227 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-log" containerID="cri-o://755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb" gracePeriod=30 Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.103948 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" podStartSLOduration=3.103927686 
podStartE2EDuration="3.103927686s" podCreationTimestamp="2026-03-19 17:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:10.10078203 +0000 UTC m=+1773.246839590" watchObservedRunningTime="2026-03-19 17:10:10.103927686 +0000 UTC m=+1773.249985226" Mar 19 17:10:10 crc kubenswrapper[4792]: I0319 17:10:10.654980 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:11 crc kubenswrapper[4792]: I0319 17:10:11.067910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerStarted","Data":"fe27ba4ea0fedff04363a0fdca368804063f4a47e94f1dcd031115cea9503a76"} Mar 19 17:10:11 crc kubenswrapper[4792]: I0319 17:10:11.069814 4792 generic.go:334] "Generic (PLEG): container finished" podID="e964f368-62cf-4886-a9be-d8536db1ee92" containerID="755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb" exitCode=143 Mar 19 17:10:11 crc kubenswrapper[4792]: I0319 17:10:11.069885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerDied","Data":"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb"} Mar 19 17:10:12 crc kubenswrapper[4792]: I0319 17:10:12.084213 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerStarted","Data":"5aca78e91c27c60e51b5222cd7268fb1f170e79d2a61f61d2e7cbdc08a60e901"} Mar 19 17:10:12 crc kubenswrapper[4792]: I0319 17:10:12.907769 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.033152 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle\") pod \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.033444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7pz\" (UniqueName: \"kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz\") pod \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.033581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data\") pod \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\" (UID: \"b7219fed-c8eb-4a92-9ff8-176b80d21e7c\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.039305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz" (OuterVolumeSpecName: "kube-api-access-wf7pz") pod "b7219fed-c8eb-4a92-9ff8-176b80d21e7c" (UID: "b7219fed-c8eb-4a92-9ff8-176b80d21e7c"). InnerVolumeSpecName "kube-api-access-wf7pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.089324 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7219fed-c8eb-4a92-9ff8-176b80d21e7c" (UID: "b7219fed-c8eb-4a92-9ff8-176b80d21e7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.097104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data" (OuterVolumeSpecName: "config-data") pod "b7219fed-c8eb-4a92-9ff8-176b80d21e7c" (UID: "b7219fed-c8eb-4a92-9ff8-176b80d21e7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.103715 4792 generic.go:334] "Generic (PLEG): container finished" podID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" containerID="13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64" exitCode=137 Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.103761 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b7219fed-c8eb-4a92-9ff8-176b80d21e7c","Type":"ContainerDied","Data":"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64"} Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.103770 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.103787 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b7219fed-c8eb-4a92-9ff8-176b80d21e7c","Type":"ContainerDied","Data":"17407790afde203215cbe6a7765d45b5baa23e537ae8dec401ecd1454436275a"} Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.103804 4792 scope.go:117] "RemoveContainer" containerID="13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.136484 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.136520 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.136534 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7pz\" (UniqueName: \"kubernetes.io/projected/b7219fed-c8eb-4a92-9ff8-176b80d21e7c-kube-api-access-wf7pz\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.172389 4792 scope.go:117] "RemoveContainer" containerID="13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64" Mar 19 17:10:13 crc kubenswrapper[4792]: E0319 17:10:13.172776 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64\": container with ID starting with 13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64 not found: ID does not exist" 
containerID="13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.172807 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64"} err="failed to get container status \"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64\": rpc error: code = NotFound desc = could not find container \"13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64\": container with ID starting with 13552fcaca46342037eb16169fb434b7cf8ac612f1d36979375e00352ec4db64 not found: ID does not exist" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.188064 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.265959 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.279657 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:10:13 crc kubenswrapper[4792]: E0319 17:10:13.280596 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.280625 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.281049 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.282667 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.284645 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.284918 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.285048 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.289248 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.455405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-kube-api-access-vqjvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.455456 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.455497 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 
crc kubenswrapper[4792]: I0319 17:10:13.455552 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.455624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.560097 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.560271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.560393 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: 
I0319 17:10:13.560544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-kube-api-access-vqjvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.560589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.565163 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.566522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.571968 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.578342 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.581505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4-kube-api-access-vqjvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.651862 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.767497 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7219fed-c8eb-4a92-9ff8-176b80d21e7c" path="/var/lib/kubelet/pods/b7219fed-c8eb-4a92-9ff8-176b80d21e7c/volumes" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.845266 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.973494 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle\") pod \"e964f368-62cf-4886-a9be-d8536db1ee92\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.973591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data\") pod \"e964f368-62cf-4886-a9be-d8536db1ee92\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.973821 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8xmt\" (UniqueName: \"kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt\") pod \"e964f368-62cf-4886-a9be-d8536db1ee92\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.973865 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs\") pod \"e964f368-62cf-4886-a9be-d8536db1ee92\" (UID: \"e964f368-62cf-4886-a9be-d8536db1ee92\") " Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.974990 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs" (OuterVolumeSpecName: "logs") pod "e964f368-62cf-4886-a9be-d8536db1ee92" (UID: "e964f368-62cf-4886-a9be-d8536db1ee92"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:13 crc kubenswrapper[4792]: I0319 17:10:13.995035 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt" (OuterVolumeSpecName: "kube-api-access-r8xmt") pod "e964f368-62cf-4886-a9be-d8536db1ee92" (UID: "e964f368-62cf-4886-a9be-d8536db1ee92"). InnerVolumeSpecName "kube-api-access-r8xmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.054981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e964f368-62cf-4886-a9be-d8536db1ee92" (UID: "e964f368-62cf-4886-a9be-d8536db1ee92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.066198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data" (OuterVolumeSpecName: "config-data") pod "e964f368-62cf-4886-a9be-d8536db1ee92" (UID: "e964f368-62cf-4886-a9be-d8536db1ee92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.076521 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8xmt\" (UniqueName: \"kubernetes.io/projected/e964f368-62cf-4886-a9be-d8536db1ee92-kube-api-access-r8xmt\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.076567 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e964f368-62cf-4886-a9be-d8536db1ee92-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.076580 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.076596 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e964f368-62cf-4886-a9be-d8536db1ee92-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.167671 4792 generic.go:334] "Generic (PLEG): container finished" podID="e964f368-62cf-4886-a9be-d8536db1ee92" containerID="355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b" exitCode=0 Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.167753 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.167775 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerDied","Data":"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b"} Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.167804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e964f368-62cf-4886-a9be-d8536db1ee92","Type":"ContainerDied","Data":"dc50caaef981856ead06bed429331c6955c3eb1fa02a8368fc6bde4b4f771db9"} Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.167821 4792 scope.go:117] "RemoveContainer" containerID="355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.195794 4792 scope.go:117] "RemoveContainer" containerID="755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.216958 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.241056 4792 scope.go:117] "RemoveContainer" containerID="355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b" Mar 19 17:10:14 crc kubenswrapper[4792]: E0319 17:10:14.241651 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b\": container with ID starting with 355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b not found: ID does not exist" containerID="355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.241690 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b"} err="failed to get container status \"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b\": rpc error: code = NotFound desc = could not find container \"355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b\": container with ID starting with 355ef34d6b4cf168a30d9a08f228df429c27530652722312e2cd67ec3ef0aa2b not found: ID does not exist" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.241716 4792 scope.go:117] "RemoveContainer" containerID="755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb" Mar 19 17:10:14 crc kubenswrapper[4792]: E0319 17:10:14.251996 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb\": container with ID starting with 755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb not found: ID does not exist" containerID="755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.252047 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb"} err="failed to get container status \"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb\": rpc error: code = NotFound desc = could not find container \"755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb\": container with ID starting with 755e8d8b7cde6157e8d4c0ab6271c7941196786052453d7fb6eb5fd060e990eb not found: ID does not exist" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.259577 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.274806 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 
17:10:14 crc kubenswrapper[4792]: E0319 17:10:14.275475 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-log" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.275500 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-log" Mar 19 17:10:14 crc kubenswrapper[4792]: E0319 17:10:14.275546 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-api" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.275553 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-api" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.275864 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-log" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.275899 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" containerName="nova-api-api" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.277628 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.281705 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.282486 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.282740 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.297220 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.306629 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x97x7\" (UniqueName: \"kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 
17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.382783 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485328 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485453 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485551 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x97x7\" (UniqueName: \"kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.485589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.486762 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.505634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.506094 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.507590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.508097 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x97x7\" (UniqueName: \"kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.515256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " pod="openstack/nova-api-0" Mar 19 17:10:14 crc kubenswrapper[4792]: I0319 17:10:14.602558 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.109656 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.198696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerStarted","Data":"38f4dc509c4de4086d81c080a5bc3674b3855afd052c101d9616e4f178f23030"} Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.198902 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-central-agent" containerID="cri-o://4bc5a05413e5e2b531a68186d33151c9ba55e703e341a758957000414e23a12e" gracePeriod=30 Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.199153 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.199480 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="proxy-httpd" containerID="cri-o://38f4dc509c4de4086d81c080a5bc3674b3855afd052c101d9616e4f178f23030" gracePeriod=30 Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.199530 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="sg-core" containerID="cri-o://5aca78e91c27c60e51b5222cd7268fb1f170e79d2a61f61d2e7cbdc08a60e901" gracePeriod=30 Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.199566 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-notification-agent" 
containerID="cri-o://fe27ba4ea0fedff04363a0fdca368804063f4a47e94f1dcd031115cea9503a76" gracePeriod=30 Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.210551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerStarted","Data":"909767478c71bb7b6e28bafaecf358886137772571ae6526b97914fb441eef56"} Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.217766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4","Type":"ContainerStarted","Data":"21099a40f6b010d5297e443ac1ef4c576bd9669ee9a0bb15d0b08d4836e00c07"} Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.217812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4","Type":"ContainerStarted","Data":"1e36fa1b26e330634f2751a5408a87774b1bfcda6c02991f4eaeeae70badce79"} Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.237680 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.059268149 podStartE2EDuration="8.237663856s" podCreationTimestamp="2026-03-19 17:10:07 +0000 UTC" firstStartedPulling="2026-03-19 17:10:08.587506528 +0000 UTC m=+1771.733564058" lastFinishedPulling="2026-03-19 17:10:14.765902225 +0000 UTC m=+1777.911959765" observedRunningTime="2026-03-19 17:10:15.229636855 +0000 UTC m=+1778.375694395" watchObservedRunningTime="2026-03-19 17:10:15.237663856 +0000 UTC m=+1778.383721396" Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.265654 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.265633243 podStartE2EDuration="2.265633243s" podCreationTimestamp="2026-03-19 17:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:15.254502538 +0000 UTC m=+1778.400560078" watchObservedRunningTime="2026-03-19 17:10:15.265633243 +0000 UTC m=+1778.411690783" Mar 19 17:10:15 crc kubenswrapper[4792]: I0319 17:10:15.754803 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e964f368-62cf-4886-a9be-d8536db1ee92" path="/var/lib/kubelet/pods/e964f368-62cf-4886-a9be-d8536db1ee92/volumes" Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230132 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerID="38f4dc509c4de4086d81c080a5bc3674b3855afd052c101d9616e4f178f23030" exitCode=0 Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230165 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerID="5aca78e91c27c60e51b5222cd7268fb1f170e79d2a61f61d2e7cbdc08a60e901" exitCode=2 Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230172 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerID="fe27ba4ea0fedff04363a0fdca368804063f4a47e94f1dcd031115cea9503a76" exitCode=0 Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerDied","Data":"38f4dc509c4de4086d81c080a5bc3674b3855afd052c101d9616e4f178f23030"} Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerDied","Data":"5aca78e91c27c60e51b5222cd7268fb1f170e79d2a61f61d2e7cbdc08a60e901"} Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.230274 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerDied","Data":"fe27ba4ea0fedff04363a0fdca368804063f4a47e94f1dcd031115cea9503a76"} Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.235290 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerStarted","Data":"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7"} Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.235346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerStarted","Data":"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90"} Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.264175 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2641497839999998 podStartE2EDuration="2.264149784s" podCreationTimestamp="2026-03-19 17:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:16.252560415 +0000 UTC m=+1779.398617945" watchObservedRunningTime="2026-03-19 17:10:16.264149784 +0000 UTC m=+1779.410207334" Mar 19 17:10:16 crc kubenswrapper[4792]: I0319 17:10:16.740625 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:10:16 crc kubenswrapper[4792]: E0319 17:10:16.741169 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:10:17 crc 
kubenswrapper[4792]: I0319 17:10:17.249349 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerID="4bc5a05413e5e2b531a68186d33151c9ba55e703e341a758957000414e23a12e" exitCode=0 Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.249422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerDied","Data":"4bc5a05413e5e2b531a68186d33151c9ba55e703e341a758957000414e23a12e"} Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.249733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ffe8952-c448-4573-b21e-27a0db808dd5","Type":"ContainerDied","Data":"b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2"} Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.249753 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5449889568ce4c0689d9a1674dc2d12c422ef4b859532e46d9d606e1870dea2" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.297145 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.387286 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.387452 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfqlm\" (UniqueName: \"kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.387545 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388070 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388463 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388644 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.388688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts\") pod \"6ffe8952-c448-4573-b21e-27a0db808dd5\" (UID: \"6ffe8952-c448-4573-b21e-27a0db808dd5\") " Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.390566 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.390595 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ffe8952-c448-4573-b21e-27a0db808dd5-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc 
kubenswrapper[4792]: I0319 17:10:17.398219 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm" (OuterVolumeSpecName: "kube-api-access-qfqlm") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "kube-api-access-qfqlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.418136 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts" (OuterVolumeSpecName: "scripts") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.479590 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.493148 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.493179 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfqlm\" (UniqueName: \"kubernetes.io/projected/6ffe8952-c448-4573-b21e-27a0db808dd5-kube-api-access-qfqlm\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.493188 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.558443 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data" (OuterVolumeSpecName: "config-data") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.597494 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.599043 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ffe8952-c448-4573-b21e-27a0db808dd5" (UID: "6ffe8952-c448-4573-b21e-27a0db808dd5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.637002 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.699446 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe8952-c448-4573-b21e-27a0db808dd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.713867 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:10:17 crc kubenswrapper[4792]: I0319 17:10:17.714109 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="dnsmasq-dns" containerID="cri-o://ccff9e422e1a9d58bd786a2dd3137e8f3e3b4b668187b2d1fb3dc36948fcb04b" gracePeriod=10 Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.271165 4792 generic.go:334] "Generic (PLEG): container finished" podID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerID="ccff9e422e1a9d58bd786a2dd3137e8f3e3b4b668187b2d1fb3dc36948fcb04b" exitCode=0 Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.271380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" event={"ID":"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694","Type":"ContainerDied","Data":"ccff9e422e1a9d58bd786a2dd3137e8f3e3b4b668187b2d1fb3dc36948fcb04b"} Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.271574 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.297439 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.318513 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.334467 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:18 crc kubenswrapper[4792]: E0319 17:10:18.335133 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-central-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335155 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-central-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: E0319 17:10:18.335211 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-notification-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335221 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-notification-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: E0319 17:10:18.335238 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="sg-core" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335247 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="sg-core" Mar 19 17:10:18 crc kubenswrapper[4792]: E0319 17:10:18.335267 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="proxy-httpd" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335275 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="proxy-httpd" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335543 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-notification-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335571 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="proxy-httpd" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335591 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="ceilometer-central-agent" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.335607 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" containerName="sg-core" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.337971 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.340049 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.340277 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.351726 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.456700 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515441 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6br\" (UniqueName: \"kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.515485 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.617305 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.617686 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.617795 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.617815 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.618270 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mp6m\" (UniqueName: \"kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.618430 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.618764 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619060 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619086 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.619452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6br\" (UniqueName: \"kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.620093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.620223 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.623024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.623418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.623968 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.626177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.638803 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6br\" (UniqueName: \"kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br\") pod \"ceilometer-0\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " pod="openstack/ceilometer-0" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.652248 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 19 
17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.655200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m" (OuterVolumeSpecName: "kube-api-access-8mp6m") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "kube-api-access-8mp6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.691656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config" (OuterVolumeSpecName: "config") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.705975 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.706422 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.713281 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.720568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.723961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") pod \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\" (UID: \"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694\") " Mar 19 17:10:18 crc kubenswrapper[4792]: W0319 17:10:18.724089 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694/volumes/kubernetes.io~configmap/dns-svc Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.724107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" (UID: "70aad588-cf2b-4eb8-ac10-c8b6a1bd0694"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725204 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725229 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725240 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725253 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mp6m\" (UniqueName: \"kubernetes.io/projected/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-kube-api-access-8mp6m\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725264 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.725273 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:18 crc kubenswrapper[4792]: I0319 17:10:18.769312 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.285606 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" event={"ID":"70aad588-cf2b-4eb8-ac10-c8b6a1bd0694","Type":"ContainerDied","Data":"5e004a0bbe43aaa34e89d1f3e3400f09ba06c78e054b2b6355466cc88f7e0755"} Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.286000 4792 scope.go:117] "RemoveContainer" containerID="ccff9e422e1a9d58bd786a2dd3137e8f3e3b4b668187b2d1fb3dc36948fcb04b" Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.286185 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-tl8v2" Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.301815 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:19 crc kubenswrapper[4792]: W0319 17:10:19.302085 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda44e6522_9196_42bd_9162_88b9f03a0b21.slice/crio-1cc60d6cd6fb5f83b7bace3c3757a8f832fbd6c7add87b98aa2e870c24753e15 WatchSource:0}: Error finding container 1cc60d6cd6fb5f83b7bace3c3757a8f832fbd6c7add87b98aa2e870c24753e15: Status 404 returned error can't find the container with id 1cc60d6cd6fb5f83b7bace3c3757a8f832fbd6c7add87b98aa2e870c24753e15 Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.327610 4792 scope.go:117] "RemoveContainer" containerID="ad2f3b175610f4f85ecc572addd1cf98b6421286437f1290e509d09f40305367" Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.442702 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.454556 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-tl8v2"] Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.755001 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ffe8952-c448-4573-b21e-27a0db808dd5" path="/var/lib/kubelet/pods/6ffe8952-c448-4573-b21e-27a0db808dd5/volumes" Mar 19 17:10:19 crc kubenswrapper[4792]: I0319 17:10:19.755968 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" path="/var/lib/kubelet/pods/70aad588-cf2b-4eb8-ac10-c8b6a1bd0694/volumes" Mar 19 17:10:20 crc kubenswrapper[4792]: I0319 17:10:20.298817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerStarted","Data":"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce"} Mar 19 17:10:20 crc kubenswrapper[4792]: I0319 17:10:20.299180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerStarted","Data":"1cc60d6cd6fb5f83b7bace3c3757a8f832fbd6c7add87b98aa2e870c24753e15"} Mar 19 17:10:21 crc kubenswrapper[4792]: I0319 17:10:21.311523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerStarted","Data":"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86"} Mar 19 17:10:22 crc kubenswrapper[4792]: I0319 17:10:22.323591 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerStarted","Data":"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9"} Mar 19 17:10:23 crc kubenswrapper[4792]: I0319 17:10:23.652882 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:23 crc kubenswrapper[4792]: I0319 17:10:23.694562 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:24 
crc kubenswrapper[4792]: I0319 17:10:24.347043 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerStarted","Data":"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b"} Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.347367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.379393 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.076576771 podStartE2EDuration="6.37937405s" podCreationTimestamp="2026-03-19 17:10:18 +0000 UTC" firstStartedPulling="2026-03-19 17:10:19.304957269 +0000 UTC m=+1782.451014809" lastFinishedPulling="2026-03-19 17:10:23.607754548 +0000 UTC m=+1786.753812088" observedRunningTime="2026-03-19 17:10:24.367918295 +0000 UTC m=+1787.513975835" watchObservedRunningTime="2026-03-19 17:10:24.37937405 +0000 UTC m=+1787.525431580" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.389188 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.604406 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.605247 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.721016 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zb8c4"] Mar 19 17:10:24 crc kubenswrapper[4792]: E0319 17:10:24.721551 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="init" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.721564 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="init" Mar 19 17:10:24 crc kubenswrapper[4792]: E0319 17:10:24.721593 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="dnsmasq-dns" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.721599 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="dnsmasq-dns" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.721903 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="70aad588-cf2b-4eb8-ac10-c8b6a1bd0694" containerName="dnsmasq-dns" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.722680 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.725720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.726019 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.740510 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zb8c4"] Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.810857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.811125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.811273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsgb\" (UniqueName: \"kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.811409 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.914467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.915043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.915207 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.915501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsgb\" (UniqueName: \"kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.920789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.921541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.924257 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:24 crc kubenswrapper[4792]: I0319 17:10:24.931924 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsgb\" (UniqueName: 
\"kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb\") pod \"nova-cell1-cell-mapping-zb8c4\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:25 crc kubenswrapper[4792]: I0319 17:10:25.047812 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:25 crc kubenswrapper[4792]: I0319 17:10:25.617103 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:25 crc kubenswrapper[4792]: I0319 17:10:25.617341 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:25 crc kubenswrapper[4792]: I0319 17:10:25.669789 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zb8c4"] Mar 19 17:10:26 crc kubenswrapper[4792]: I0319 17:10:26.374920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zb8c4" event={"ID":"79cca8f9-a74b-422e-bded-61895a61cafc","Type":"ContainerStarted","Data":"adb066379e5212825b668ce24f65347b0967df724c90ad6fe784edcd25a7a904"} Mar 19 17:10:26 crc kubenswrapper[4792]: I0319 17:10:26.375233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zb8c4" event={"ID":"79cca8f9-a74b-422e-bded-61895a61cafc","Type":"ContainerStarted","Data":"dcdb2dd26b4e8adb04e61815a0b3ab0d1087797d015c4cb64891e605a769dc6d"} Mar 19 17:10:26 crc kubenswrapper[4792]: I0319 17:10:26.412563 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zb8c4" podStartSLOduration=2.412544333 podStartE2EDuration="2.412544333s" podCreationTimestamp="2026-03-19 17:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:26.407178616 +0000 UTC m=+1789.553236166" watchObservedRunningTime="2026-03-19 17:10:26.412544333 +0000 UTC m=+1789.558601873" Mar 19 17:10:29 crc kubenswrapper[4792]: I0319 17:10:29.739443 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:10:29 crc kubenswrapper[4792]: E0319 17:10:29.740344 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:10:31 crc kubenswrapper[4792]: I0319 17:10:31.427959 4792 generic.go:334] "Generic (PLEG): container finished" podID="79cca8f9-a74b-422e-bded-61895a61cafc" containerID="adb066379e5212825b668ce24f65347b0967df724c90ad6fe784edcd25a7a904" exitCode=0 Mar 19 17:10:31 crc kubenswrapper[4792]: I0319 17:10:31.427998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zb8c4" event={"ID":"79cca8f9-a74b-422e-bded-61895a61cafc","Type":"ContainerDied","Data":"adb066379e5212825b668ce24f65347b0967df724c90ad6fe784edcd25a7a904"} Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.441747 4792 generic.go:334] "Generic (PLEG): container finished" podID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerID="03768a66eac5cea8438b6aa509ee6a5c2533893528703cb3a40c6839aa3ff249" 
exitCode=137 Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.442228 4792 generic.go:334] "Generic (PLEG): container finished" podID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerID="a5819ecc588c431f8a724682fd8f13a1eed5dc7d0490b65174272becc7b549ee" exitCode=137 Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.441813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerDied","Data":"03768a66eac5cea8438b6aa509ee6a5c2533893528703cb3a40c6839aa3ff249"} Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.442333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerDied","Data":"a5819ecc588c431f8a724682fd8f13a1eed5dc7d0490b65174272becc7b549ee"} Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.442346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d6af4dd-ea78-485e-bb95-dc92993ed452","Type":"ContainerDied","Data":"d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33"} Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.442355 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70f0b5c72bd5c908012a65c29d325aa83339ace18ce61bc0a421e9fc05bed33" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.538218 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.603422 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.603470 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.704699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data\") pod \"7d6af4dd-ea78-485e-bb95-dc92993ed452\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.704805 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f5th\" (UniqueName: \"kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th\") pod \"7d6af4dd-ea78-485e-bb95-dc92993ed452\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.704927 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle\") pod \"7d6af4dd-ea78-485e-bb95-dc92993ed452\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.705092 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts\") pod \"7d6af4dd-ea78-485e-bb95-dc92993ed452\" (UID: \"7d6af4dd-ea78-485e-bb95-dc92993ed452\") " Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.715075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th" 
(OuterVolumeSpecName: "kube-api-access-2f5th") pod "7d6af4dd-ea78-485e-bb95-dc92993ed452" (UID: "7d6af4dd-ea78-485e-bb95-dc92993ed452"). InnerVolumeSpecName "kube-api-access-2f5th". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.716321 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts" (OuterVolumeSpecName: "scripts") pod "7d6af4dd-ea78-485e-bb95-dc92993ed452" (UID: "7d6af4dd-ea78-485e-bb95-dc92993ed452"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.808921 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.808963 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f5th\" (UniqueName: \"kubernetes.io/projected/7d6af4dd-ea78-485e-bb95-dc92993ed452-kube-api-access-2f5th\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.848841 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d6af4dd-ea78-485e-bb95-dc92993ed452" (UID: "7d6af4dd-ea78-485e-bb95-dc92993ed452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.851623 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.861329 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data" (OuterVolumeSpecName: "config-data") pod "7d6af4dd-ea78-485e-bb95-dc92993ed452" (UID: "7d6af4dd-ea78-485e-bb95-dc92993ed452"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.926089 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:32 crc kubenswrapper[4792]: I0319 17:10:32.926222 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6af4dd-ea78-485e-bb95-dc92993ed452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.027722 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsgb\" (UniqueName: \"kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb\") pod \"79cca8f9-a74b-422e-bded-61895a61cafc\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.027903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data\") pod \"79cca8f9-a74b-422e-bded-61895a61cafc\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.027963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts\") pod 
\"79cca8f9-a74b-422e-bded-61895a61cafc\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.028079 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle\") pod \"79cca8f9-a74b-422e-bded-61895a61cafc\" (UID: \"79cca8f9-a74b-422e-bded-61895a61cafc\") " Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.038041 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts" (OuterVolumeSpecName: "scripts") pod "79cca8f9-a74b-422e-bded-61895a61cafc" (UID: "79cca8f9-a74b-422e-bded-61895a61cafc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.050139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb" (OuterVolumeSpecName: "kube-api-access-wpsgb") pod "79cca8f9-a74b-422e-bded-61895a61cafc" (UID: "79cca8f9-a74b-422e-bded-61895a61cafc"). InnerVolumeSpecName "kube-api-access-wpsgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.127112 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79cca8f9-a74b-422e-bded-61895a61cafc" (UID: "79cca8f9-a74b-422e-bded-61895a61cafc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.133981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data" (OuterVolumeSpecName: "config-data") pod "79cca8f9-a74b-422e-bded-61895a61cafc" (UID: "79cca8f9-a74b-422e-bded-61895a61cafc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.135461 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.135505 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.135520 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cca8f9-a74b-422e-bded-61895a61cafc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.135533 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsgb\" (UniqueName: \"kubernetes.io/projected/79cca8f9-a74b-422e-bded-61895a61cafc-kube-api-access-wpsgb\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.458623 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zb8c4" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.458662 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.458621 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zb8c4" event={"ID":"79cca8f9-a74b-422e-bded-61895a61cafc","Type":"ContainerDied","Data":"dcdb2dd26b4e8adb04e61815a0b3ab0d1087797d015c4cb64891e605a769dc6d"} Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.458945 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcdb2dd26b4e8adb04e61815a0b3ab0d1087797d015c4cb64891e605a769dc6d" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.555593 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.580875 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.608371 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.609304 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-evaluator" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609327 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-evaluator" Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.609359 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-api" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609365 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-api" Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.609382 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-listener" Mar 19 17:10:33 
crc kubenswrapper[4792]: I0319 17:10:33.609388 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-listener" Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.609440 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cca8f9-a74b-422e-bded-61895a61cafc" containerName="nova-manage" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609447 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cca8f9-a74b-422e-bded-61895a61cafc" containerName="nova-manage" Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.609459 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-notifier" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609466 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-notifier" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609711 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-api" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609731 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-notifier" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609747 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-evaluator" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609758 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cca8f9-a74b-422e-bded-61895a61cafc" containerName="nova-manage" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.609771 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" containerName="aodh-listener" Mar 19 17:10:33 crc kubenswrapper[4792]: 
I0319 17:10:33.613284 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.620031 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.620323 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.620327 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.620446 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.620523 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gdn5p" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.627064 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.649681 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.649974 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerName="nova-scheduler-scheduler" containerID="cri-o://3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" gracePeriod=30 Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.663334 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.663599 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-log" 
containerID="cri-o://dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90" gracePeriod=30 Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.663746 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-api" containerID="cri-o://aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7" gracePeriod=30 Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.698737 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.699494 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-log" containerID="cri-o://bde8d4cb05f66c4e8fe49ae1e4c2c0325aa27edc0acddb72d8cbfedff8947fa5" gracePeriod=30 Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.699630 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-metadata" containerID="cri-o://98c6d049a8c997ad0114db655559f053f2cdb78e59de8e1a77127e87b29df356" gracePeriod=30 Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " 
pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8h6\" (UniqueName: \"kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753434 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.753585 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6af4dd-ea78-485e-bb95-dc92993ed452" path="/var/lib/kubelet/pods/7d6af4dd-ea78-485e-bb95-dc92993ed452/volumes" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855046 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855146 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855319 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8h6\" (UniqueName: \"kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.855478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.860046 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.860477 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.860800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.861188 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.865437 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.875693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8h6\" (UniqueName: \"kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6\") pod \"aodh-0\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: I0319 17:10:33.948434 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.989307 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.992642 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.994143 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 17:10:33 crc kubenswrapper[4792]: E0319 17:10:33.994207 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerName="nova-scheduler-scheduler" Mar 19 17:10:34 crc kubenswrapper[4792]: W0319 17:10:34.433905 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode89f502e_a41f_45ca_89ef_93a4f4ac4f62.slice/crio-01c3117975fbaa47f8923a88fcee93aad4fde4b4434027baea1ae86323afd312 WatchSource:0}: Error finding 
container 01c3117975fbaa47f8923a88fcee93aad4fde4b4434027baea1ae86323afd312: Status 404 returned error can't find the container with id 01c3117975fbaa47f8923a88fcee93aad4fde4b4434027baea1ae86323afd312 Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.438467 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.474792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerStarted","Data":"01c3117975fbaa47f8923a88fcee93aad4fde4b4434027baea1ae86323afd312"} Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.480233 4792 generic.go:334] "Generic (PLEG): container finished" podID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerID="bde8d4cb05f66c4e8fe49ae1e4c2c0325aa27edc0acddb72d8cbfedff8947fa5" exitCode=143 Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.480299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerDied","Data":"bde8d4cb05f66c4e8fe49ae1e4c2c0325aa27edc0acddb72d8cbfedff8947fa5"} Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.483668 4792 generic.go:334] "Generic (PLEG): container finished" podID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerID="dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90" exitCode=143 Mar 19 17:10:34 crc kubenswrapper[4792]: I0319 17:10:34.483710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerDied","Data":"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90"} Mar 19 17:10:35 crc kubenswrapper[4792]: I0319 17:10:35.497447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerStarted","Data":"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb"} Mar 19 17:10:36 crc kubenswrapper[4792]: I0319 17:10:36.523365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerStarted","Data":"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.338921 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.439789 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.440330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.440486 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.440640 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: 
\"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.440718 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.440757 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97x7\" (UniqueName: \"kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7\") pod \"f5448145-bfc4-4a5e-a7d8-939b985b2272\" (UID: \"f5448145-bfc4-4a5e-a7d8-939b985b2272\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.441201 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs" (OuterVolumeSpecName: "logs") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.441810 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5448145-bfc4-4a5e-a7d8-939b985b2272-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.446008 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7" (OuterVolumeSpecName: "kube-api-access-x97x7") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "kube-api-access-x97x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.472765 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data" (OuterVolumeSpecName: "config-data") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.481020 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.516447 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.518633 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5448145-bfc4-4a5e-a7d8-939b985b2272" (UID: "f5448145-bfc4-4a5e-a7d8-939b985b2272"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.536796 4792 generic.go:334] "Generic (PLEG): container finished" podID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerID="98c6d049a8c997ad0114db655559f053f2cdb78e59de8e1a77127e87b29df356" exitCode=0 Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.536957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerDied","Data":"98c6d049a8c997ad0114db655559f053f2cdb78e59de8e1a77127e87b29df356"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.536991 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4","Type":"ContainerDied","Data":"2bd54773754edd15ad62601fad0ac3b5e5a00e2214794959d774d85bf9a74860"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.537026 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd54773754edd15ad62601fad0ac3b5e5a00e2214794959d774d85bf9a74860" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.539073 4792 generic.go:334] "Generic (PLEG): container finished" podID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerID="aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7" exitCode=0 Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.539138 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerDied","Data":"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.539163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5448145-bfc4-4a5e-a7d8-939b985b2272","Type":"ContainerDied","Data":"909767478c71bb7b6e28bafaecf358886137772571ae6526b97914fb441eef56"} Mar 19 17:10:37 crc 
kubenswrapper[4792]: I0319 17:10:37.539181 4792 scope.go:117] "RemoveContainer" containerID="aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.539313 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.543609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerStarted","Data":"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.543638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerStarted","Data":"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae"} Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.544703 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.544726 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.544735 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.544744 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x97x7\" (UniqueName: \"kubernetes.io/projected/f5448145-bfc4-4a5e-a7d8-939b985b2272-kube-api-access-x97x7\") on node \"crc\" DevicePath \"\"" 
Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.544755 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5448145-bfc4-4a5e-a7d8-939b985b2272-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.577959 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.916624604 podStartE2EDuration="4.577944301s" podCreationTimestamp="2026-03-19 17:10:33 +0000 UTC" firstStartedPulling="2026-03-19 17:10:34.436442382 +0000 UTC m=+1797.582499922" lastFinishedPulling="2026-03-19 17:10:37.097762079 +0000 UTC m=+1800.243819619" observedRunningTime="2026-03-19 17:10:37.575086742 +0000 UTC m=+1800.721144282" watchObservedRunningTime="2026-03-19 17:10:37.577944301 +0000 UTC m=+1800.724001841" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.637583 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.668722 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.668752 4792 scope.go:117] "RemoveContainer" containerID="dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.707444 4792 scope.go:117] "RemoveContainer" containerID="aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7" Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.709952 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7\": container with ID starting with aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7 not found: ID does not exist" 
containerID="aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.709985 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7"} err="failed to get container status \"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7\": rpc error: code = NotFound desc = could not find container \"aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7\": container with ID starting with aae74720cc7494873ef79d416d8c43f487617ca8e33008367c503a055141c6e7 not found: ID does not exist" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.710014 4792 scope.go:117] "RemoveContainer" containerID="dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.712519 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.714982 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90\": container with ID starting with dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90 not found: ID does not exist" containerID="dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.715023 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90"} err="failed to get container status \"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90\": rpc error: code = NotFound desc = could not find container \"dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90\": container with ID starting with 
dabcd5284a62a2c32a7cef286124f3b120baeb9e8e47a404778dbfbf62e15f90 not found: ID does not exist" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.726372 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.727193 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-log" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.727380 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-log" Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.727538 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-api" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.727651 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-api" Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.727762 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-metadata" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.727883 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-metadata" Mar 19 17:10:37 crc kubenswrapper[4792]: E0319 17:10:37.727998 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-log" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.728112 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-log" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.728506 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-api" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.728610 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" containerName="nova-api-log" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.728683 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-metadata" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.728760 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" containerName="nova-metadata-log" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.731247 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.736452 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.736583 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.738142 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.748795 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5b4c\" (UniqueName: \"kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c\") pod \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.749335 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs\") pod 
\"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.749373 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle\") pod \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.749589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs\") pod \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.749761 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data\") pod \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\" (UID: \"a04452c3-e2d0-4bfd-96e9-8e78807b4fb4\") " Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.751519 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs" (OuterVolumeSpecName: "logs") pod "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" (UID: "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.784193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c" (OuterVolumeSpecName: "kube-api-access-j5b4c") pod "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" (UID: "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4"). InnerVolumeSpecName "kube-api-access-j5b4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.793961 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5448145-bfc4-4a5e-a7d8-939b985b2272" path="/var/lib/kubelet/pods/f5448145-bfc4-4a5e-a7d8-939b985b2272/volumes" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.806150 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.819943 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" (UID: "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.835722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data" (OuterVolumeSpecName: "config-data") pod "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" (UID: "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.853563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64q4\" (UniqueName: \"kubernetes.io/projected/cd06887b-abf2-4787-9c4e-db0eed74d8ca-kube-api-access-g64q4\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.853622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd06887b-abf2-4787-9c4e-db0eed74d8ca-logs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854048 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-config-data\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854223 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854335 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854356 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5b4c\" (UniqueName: \"kubernetes.io/projected/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-kube-api-access-j5b4c\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854370 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-logs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.854383 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.865964 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" (UID: "a04452c3-e2d0-4bfd-96e9-8e78807b4fb4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.955875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.955917 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-config-data\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.955958 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.956001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.956054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64q4\" (UniqueName: \"kubernetes.io/projected/cd06887b-abf2-4787-9c4e-db0eed74d8ca-kube-api-access-g64q4\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.956075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cd06887b-abf2-4787-9c4e-db0eed74d8ca-logs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.956146 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.956497 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd06887b-abf2-4787-9c4e-db0eed74d8ca-logs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.958966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.959412 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-config-data\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.959676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.960307 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd06887b-abf2-4787-9c4e-db0eed74d8ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:37 crc kubenswrapper[4792]: I0319 17:10:37.976560 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64q4\" (UniqueName: \"kubernetes.io/projected/cd06887b-abf2-4787-9c4e-db0eed74d8ca-kube-api-access-g64q4\") pod \"nova-api-0\" (UID: \"cd06887b-abf2-4787-9c4e-db0eed74d8ca\") " pod="openstack/nova-api-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.078763 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.576828 4792 generic.go:334] "Generic (PLEG): container finished" podID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerID="3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" exitCode=0 Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.577335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd5da2b-f2ec-4313-a738-63373d968a78","Type":"ContainerDied","Data":"3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2"} Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.578883 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.600663 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.644640 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.692216 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.731980 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.736821 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.741106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.741361 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.807564 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.891562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.891639 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdgm\" (UniqueName: \"kubernetes.io/projected/d5ac4a92-3577-4b41-8b74-2598a64d131c-kube-api-access-8vdgm\") pod 
\"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.891728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.891767 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac4a92-3577-4b41-8b74-2598a64d131c-logs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.891813 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.934310 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.997670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdgm\" (UniqueName: \"kubernetes.io/projected/d5ac4a92-3577-4b41-8b74-2598a64d131c-kube-api-access-8vdgm\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.998131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.998272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac4a92-3577-4b41-8b74-2598a64d131c-logs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:38 crc kubenswrapper[4792]: I0319 17:10:38.998808 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac4a92-3577-4b41-8b74-2598a64d131c-logs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.001157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.001577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.002376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.004615 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.006913 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac4a92-3577-4b41-8b74-2598a64d131c-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.029805 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdgm\" (UniqueName: \"kubernetes.io/projected/d5ac4a92-3577-4b41-8b74-2598a64d131c-kube-api-access-8vdgm\") pod \"nova-metadata-0\" (UID: \"d5ac4a92-3577-4b41-8b74-2598a64d131c\") " pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.104140 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b29rb\" (UniqueName: \"kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb\") pod \"0bd5da2b-f2ec-4313-a738-63373d968a78\" (UID: 
\"0bd5da2b-f2ec-4313-a738-63373d968a78\") " Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.104379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data\") pod \"0bd5da2b-f2ec-4313-a738-63373d968a78\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.104464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle\") pod \"0bd5da2b-f2ec-4313-a738-63373d968a78\" (UID: \"0bd5da2b-f2ec-4313-a738-63373d968a78\") " Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.111642 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb" (OuterVolumeSpecName: "kube-api-access-b29rb") pod "0bd5da2b-f2ec-4313-a738-63373d968a78" (UID: "0bd5da2b-f2ec-4313-a738-63373d968a78"). InnerVolumeSpecName "kube-api-access-b29rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.135293 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.164067 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data" (OuterVolumeSpecName: "config-data") pod "0bd5da2b-f2ec-4313-a738-63373d968a78" (UID: "0bd5da2b-f2ec-4313-a738-63373d968a78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.164276 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bd5da2b-f2ec-4313-a738-63373d968a78" (UID: "0bd5da2b-f2ec-4313-a738-63373d968a78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.208163 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.208201 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd5da2b-f2ec-4313-a738-63373d968a78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.208221 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b29rb\" (UniqueName: \"kubernetes.io/projected/0bd5da2b-f2ec-4313-a738-63373d968a78-kube-api-access-b29rb\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.591234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd5da2b-f2ec-4313-a738-63373d968a78","Type":"ContainerDied","Data":"b659793e0c6c126951a95a4877365d87ea079712c2068693b3d9f693d492f940"} Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.591275 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.591551 4792 scope.go:117] "RemoveContainer" containerID="3249fefed58a692dd5018e2b2e71bb073da9292d6de0b2b0889eb23453de60f2" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.593635 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd06887b-abf2-4787-9c4e-db0eed74d8ca","Type":"ContainerStarted","Data":"0555d3a1f1f7bcacce163f4a61be246963bf9fd2c7dfdf204e988a089c0840a2"} Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.593667 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd06887b-abf2-4787-9c4e-db0eed74d8ca","Type":"ContainerStarted","Data":"265a96aaee3ab20b4588e88ff9af758308e610817f8309e57dc09aa6df07d8fc"} Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.593681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd06887b-abf2-4787-9c4e-db0eed74d8ca","Type":"ContainerStarted","Data":"9e37a572bb292c941792765c11b6bbeaa6a821a5a403c9dc26591b29592cf087"} Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.629585 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.632248 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.632236535 podStartE2EDuration="2.632236535s" podCreationTimestamp="2026-03-19 17:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:39.625389307 +0000 UTC m=+1802.771446837" watchObservedRunningTime="2026-03-19 17:10:39.632236535 +0000 UTC m=+1802.778294075" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.670223 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:39 
crc kubenswrapper[4792]: I0319 17:10:39.682462 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.761437 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" path="/var/lib/kubelet/pods/0bd5da2b-f2ec-4313-a738-63373d968a78/volumes" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.762734 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04452c3-e2d0-4bfd-96e9-8e78807b4fb4" path="/var/lib/kubelet/pods/a04452c3-e2d0-4bfd-96e9-8e78807b4fb4/volumes" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.763426 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:39 crc kubenswrapper[4792]: E0319 17:10:39.764262 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerName="nova-scheduler-scheduler" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.764282 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerName="nova-scheduler-scheduler" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.764510 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd5da2b-f2ec-4313-a738-63373d968a78" containerName="nova-scheduler-scheduler" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.765256 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.765809 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.769569 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.931959 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-config-data\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.932037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxrgh\" (UniqueName: \"kubernetes.io/projected/7b384386-fda0-42ba-9b7b-ddb790da02b5-kube-api-access-pxrgh\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:39 crc kubenswrapper[4792]: I0319 17:10:39.932294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.035956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.036079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-config-data\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.036121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxrgh\" (UniqueName: \"kubernetes.io/projected/7b384386-fda0-42ba-9b7b-ddb790da02b5-kube-api-access-pxrgh\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.042322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.048535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b384386-fda0-42ba-9b7b-ddb790da02b5-config-data\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.055211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxrgh\" (UniqueName: \"kubernetes.io/projected/7b384386-fda0-42ba-9b7b-ddb790da02b5-kube-api-access-pxrgh\") pod \"nova-scheduler-0\" (UID: \"7b384386-fda0-42ba-9b7b-ddb790da02b5\") " pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.146320 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.611793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.631624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac4a92-3577-4b41-8b74-2598a64d131c","Type":"ContainerStarted","Data":"8e0a1be7125c5435585ddaf7f53bf5abeda9f22afd8b75ac7430e10ce0a1e62b"} Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.631659 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac4a92-3577-4b41-8b74-2598a64d131c","Type":"ContainerStarted","Data":"cc3710a5ec08635defb672295aa6cb83bef256b81fc4247a186404820e94bfaa"} Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.631670 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac4a92-3577-4b41-8b74-2598a64d131c","Type":"ContainerStarted","Data":"fb26c12370da67bef24f45776ad6213c3ef2aacbf4faf60bccb2f10b3e4f9cb2"} Mar 19 17:10:40 crc kubenswrapper[4792]: I0319 17:10:40.658516 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6584981069999998 podStartE2EDuration="2.658498107s" podCreationTimestamp="2026-03-19 17:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:40.656612835 +0000 UTC m=+1803.802670375" watchObservedRunningTime="2026-03-19 17:10:40.658498107 +0000 UTC m=+1803.804555647" Mar 19 17:10:41 crc kubenswrapper[4792]: I0319 17:10:41.646072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b384386-fda0-42ba-9b7b-ddb790da02b5","Type":"ContainerStarted","Data":"c3f20dec3546e1683a215604af225b6f138e6a208ca5f2a3267255c1e3b1215e"} Mar 19 17:10:41 
crc kubenswrapper[4792]: I0319 17:10:41.646382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b384386-fda0-42ba-9b7b-ddb790da02b5","Type":"ContainerStarted","Data":"caa9eb189884ece990e64312f618ba5a54c55f78f5ef8f3fa5afef5f858670d3"} Mar 19 17:10:41 crc kubenswrapper[4792]: I0319 17:10:41.679180 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.679159365 podStartE2EDuration="2.679159365s" podCreationTimestamp="2026-03-19 17:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:10:41.66510373 +0000 UTC m=+1804.811161290" watchObservedRunningTime="2026-03-19 17:10:41.679159365 +0000 UTC m=+1804.825216925" Mar 19 17:10:41 crc kubenswrapper[4792]: I0319 17:10:41.740572 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:10:41 crc kubenswrapper[4792]: E0319 17:10:41.740929 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:10:45 crc kubenswrapper[4792]: I0319 17:10:45.146609 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 17:10:48 crc kubenswrapper[4792]: I0319 17:10:48.079325 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 17:10:48 crc kubenswrapper[4792]: I0319 17:10:48.079367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Mar 19 17:10:48 crc kubenswrapper[4792]: I0319 17:10:48.789356 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 19 17:10:49 crc kubenswrapper[4792]: I0319 17:10:49.092023 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cd06887b-abf2-4787-9c4e-db0eed74d8ca" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.21:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:49 crc kubenswrapper[4792]: I0319 17:10:49.092307 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cd06887b-abf2-4787-9c4e-db0eed74d8ca" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.21:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:49 crc kubenswrapper[4792]: I0319 17:10:49.135008 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 17:10:49 crc kubenswrapper[4792]: I0319 17:10:49.135521 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.146915 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.147019 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5ac4a92-3577-4b41-8b74-2598a64d131c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.146982 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="d5ac4a92-3577-4b41-8b74-2598a64d131c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.178694 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.562701 4792 scope.go:117] "RemoveContainer" containerID="9be402fd0a2ac903bcc6f1c090a28e25b7ae33423ae488bf771ec2dd01bf9ca1" Mar 19 17:10:50 crc kubenswrapper[4792]: I0319 17:10:50.791891 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.433829 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.434526 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a72bb0db-ba96-464e-84be-283010baf52c" containerName="kube-state-metrics" containerID="cri-o://aa5591dd145814d99d4e5532e8e11b5af69b7fd3bb1c2e38e4c9a0039ab20377" gracePeriod=30 Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.572442 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.572964 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="77cb387a-c012-4955-a0a9-272badd02d11" containerName="mysqld-exporter" containerID="cri-o://98ab4eb98530907bac5570762736c74bf0f49059bddeb51536ade08609e12178" gracePeriod=30 Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.823989 4792 generic.go:334] "Generic (PLEG): container finished" podID="77cb387a-c012-4955-a0a9-272badd02d11" 
containerID="98ab4eb98530907bac5570762736c74bf0f49059bddeb51536ade08609e12178" exitCode=2 Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.824084 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"77cb387a-c012-4955-a0a9-272badd02d11","Type":"ContainerDied","Data":"98ab4eb98530907bac5570762736c74bf0f49059bddeb51536ade08609e12178"} Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.830074 4792 generic.go:334] "Generic (PLEG): container finished" podID="a72bb0db-ba96-464e-84be-283010baf52c" containerID="aa5591dd145814d99d4e5532e8e11b5af69b7fd3bb1c2e38e4c9a0039ab20377" exitCode=2 Mar 19 17:10:53 crc kubenswrapper[4792]: I0319 17:10:53.830207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a72bb0db-ba96-464e-84be-283010baf52c","Type":"ContainerDied","Data":"aa5591dd145814d99d4e5532e8e11b5af69b7fd3bb1c2e38e4c9a0039ab20377"} Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.065223 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.161741 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.193554 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r66vp\" (UniqueName: \"kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp\") pod \"a72bb0db-ba96-464e-84be-283010baf52c\" (UID: \"a72bb0db-ba96-464e-84be-283010baf52c\") " Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.199077 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp" (OuterVolumeSpecName: "kube-api-access-r66vp") pod "a72bb0db-ba96-464e-84be-283010baf52c" (UID: "a72bb0db-ba96-464e-84be-283010baf52c"). InnerVolumeSpecName "kube-api-access-r66vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.295712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rlc\" (UniqueName: \"kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc\") pod \"77cb387a-c012-4955-a0a9-272badd02d11\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.295797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle\") pod \"77cb387a-c012-4955-a0a9-272badd02d11\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.296100 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data\") pod \"77cb387a-c012-4955-a0a9-272badd02d11\" (UID: \"77cb387a-c012-4955-a0a9-272badd02d11\") " Mar 19 17:10:54 crc 
kubenswrapper[4792]: I0319 17:10:54.296864 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r66vp\" (UniqueName: \"kubernetes.io/projected/a72bb0db-ba96-464e-84be-283010baf52c-kube-api-access-r66vp\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.299191 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc" (OuterVolumeSpecName: "kube-api-access-v8rlc") pod "77cb387a-c012-4955-a0a9-272badd02d11" (UID: "77cb387a-c012-4955-a0a9-272badd02d11"). InnerVolumeSpecName "kube-api-access-v8rlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.328765 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77cb387a-c012-4955-a0a9-272badd02d11" (UID: "77cb387a-c012-4955-a0a9-272badd02d11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.347258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data" (OuterVolumeSpecName: "config-data") pod "77cb387a-c012-4955-a0a9-272badd02d11" (UID: "77cb387a-c012-4955-a0a9-272badd02d11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.399368 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rlc\" (UniqueName: \"kubernetes.io/projected/77cb387a-c012-4955-a0a9-272badd02d11-kube-api-access-v8rlc\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.399644 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.399705 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77cb387a-c012-4955-a0a9-272badd02d11-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.841387 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"77cb387a-c012-4955-a0a9-272badd02d11","Type":"ContainerDied","Data":"0f0d64b7dfad4935d417f50b1e85b3588fcbc084c6c71f4fe286e835e4cf1046"} Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.841405 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.841720 4792 scope.go:117] "RemoveContainer" containerID="98ab4eb98530907bac5570762736c74bf0f49059bddeb51536ade08609e12178" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.852585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a72bb0db-ba96-464e-84be-283010baf52c","Type":"ContainerDied","Data":"baea52129fc06890008a67ba3f1e7c33c5a4f9dd8cc2c33de79bfd5dca032299"} Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.852678 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.884014 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.895850 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.902475 4792 scope.go:117] "RemoveContainer" containerID="aa5591dd145814d99d4e5532e8e11b5af69b7fd3bb1c2e38e4c9a0039ab20377" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.907566 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.920204 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.969476 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: E0319 17:10:54.970385 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72bb0db-ba96-464e-84be-283010baf52c" containerName="kube-state-metrics" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.970405 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72bb0db-ba96-464e-84be-283010baf52c" containerName="kube-state-metrics" Mar 19 17:10:54 crc kubenswrapper[4792]: E0319 17:10:54.970431 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cb387a-c012-4955-a0a9-272badd02d11" containerName="mysqld-exporter" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.970440 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cb387a-c012-4955-a0a9-272badd02d11" containerName="mysqld-exporter" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.970904 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72bb0db-ba96-464e-84be-283010baf52c" 
containerName="kube-state-metrics" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.970951 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="77cb387a-c012-4955-a0a9-272badd02d11" containerName="mysqld-exporter" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.972337 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.975552 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.976300 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.994708 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.996687 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 19 17:10:54 crc kubenswrapper[4792]: I0319 17:10:54.999283 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.000340 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.008060 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.022996 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.119621 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-config-data\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.119675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.119704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.119831 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.120022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.120291 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjvx\" (UniqueName: \"kubernetes.io/projected/eef31711-ec31-4b3c-b5b5-e27be14b85ef-kube-api-access-rhjvx\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.120349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.120479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthsw\" (UniqueName: \"kubernetes.io/projected/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-api-access-gthsw\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 
17:10:55.222139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjvx\" (UniqueName: \"kubernetes.io/projected/eef31711-ec31-4b3c-b5b5-e27be14b85ef-kube-api-access-rhjvx\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222249 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthsw\" (UniqueName: \"kubernetes.io/projected/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-api-access-gthsw\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222270 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-config-data\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222311 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.222423 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.227475 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-config-data\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.227517 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.228399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.231396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.231482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.233628 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef31711-ec31-4b3c-b5b5-e27be14b85ef-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.244853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjvx\" (UniqueName: \"kubernetes.io/projected/eef31711-ec31-4b3c-b5b5-e27be14b85ef-kube-api-access-rhjvx\") pod \"mysqld-exporter-0\" (UID: \"eef31711-ec31-4b3c-b5b5-e27be14b85ef\") " pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.249510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthsw\" (UniqueName: \"kubernetes.io/projected/3c37ff21-a32e-4b93-9292-3648b8cc3a8e-kube-api-access-gthsw\") pod \"kube-state-metrics-0\" 
(UID: \"3c37ff21-a32e-4b93-9292-3648b8cc3a8e\") " pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.296897 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.318335 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.593164 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.593418 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-central-agent" containerID="cri-o://2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce" gracePeriod=30 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.593597 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="proxy-httpd" containerID="cri-o://2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b" gracePeriod=30 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.593638 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="sg-core" containerID="cri-o://2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9" gracePeriod=30 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.593668 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-notification-agent" containerID="cri-o://7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86" gracePeriod=30 Mar 19 17:10:55 
crc kubenswrapper[4792]: I0319 17:10:55.754564 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cb387a-c012-4955-a0a9-272badd02d11" path="/var/lib/kubelet/pods/77cb387a-c012-4955-a0a9-272badd02d11/volumes" Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.756243 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72bb0db-ba96-464e-84be-283010baf52c" path="/var/lib/kubelet/pods/a72bb0db-ba96-464e-84be-283010baf52c/volumes" Mar 19 17:10:55 crc kubenswrapper[4792]: W0319 17:10:55.800579 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef31711_ec31_4b3c_b5b5_e27be14b85ef.slice/crio-23f45b41246f13616828030f9d0240af5b70fff07955bef36a247d47b7df2c15 WatchSource:0}: Error finding container 23f45b41246f13616828030f9d0240af5b70fff07955bef36a247d47b7df2c15: Status 404 returned error can't find the container with id 23f45b41246f13616828030f9d0240af5b70fff07955bef36a247d47b7df2c15 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.806092 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.811093 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.868236 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"eef31711-ec31-4b3c-b5b5-e27be14b85ef","Type":"ContainerStarted","Data":"23f45b41246f13616828030f9d0240af5b70fff07955bef36a247d47b7df2c15"} Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.871153 4792 generic.go:334] "Generic (PLEG): container finished" podID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerID="2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b" exitCode=0 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.871247 4792 generic.go:334] "Generic 
(PLEG): container finished" podID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerID="2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9" exitCode=2 Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.871242 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerDied","Data":"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b"} Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.871295 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerDied","Data":"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9"} Mar 19 17:10:55 crc kubenswrapper[4792]: I0319 17:10:55.946616 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.079828 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.079894 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.741252 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:10:56 crc kubenswrapper[4792]: E0319 17:10:56.741814 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.894384 4792 generic.go:334] "Generic 
(PLEG): container finished" podID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerID="2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce" exitCode=0 Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.894430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerDied","Data":"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce"} Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.898543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"eef31711-ec31-4b3c-b5b5-e27be14b85ef","Type":"ContainerStarted","Data":"d682d3a0d835390143ba5f4774bd0d3e839c351d2a04dccde6f392a6d07371d5"} Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.901895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c37ff21-a32e-4b93-9292-3648b8cc3a8e","Type":"ContainerStarted","Data":"6ceef3625c7c3a7044ddcefb75f3eb09190c69a3cbb6d17aa4d848b5974fdec9"} Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.901936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c37ff21-a32e-4b93-9292-3648b8cc3a8e","Type":"ContainerStarted","Data":"b973a78dfe472850a90e700bd95448fb4192a7afb55e67fc58459fc216a8fb93"} Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.902089 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.918548 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.330766936 podStartE2EDuration="2.918524001s" podCreationTimestamp="2026-03-19 17:10:54 +0000 UTC" firstStartedPulling="2026-03-19 17:10:55.810443022 +0000 UTC m=+1818.956500562" lastFinishedPulling="2026-03-19 17:10:56.398200087 +0000 UTC m=+1819.544257627" 
observedRunningTime="2026-03-19 17:10:56.915562519 +0000 UTC m=+1820.061620059" watchObservedRunningTime="2026-03-19 17:10:56.918524001 +0000 UTC m=+1820.064581541" Mar 19 17:10:56 crc kubenswrapper[4792]: I0319 17:10:56.952209 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.5667261630000002 podStartE2EDuration="2.952184375s" podCreationTimestamp="2026-03-19 17:10:54 +0000 UTC" firstStartedPulling="2026-03-19 17:10:55.942181878 +0000 UTC m=+1819.088239418" lastFinishedPulling="2026-03-19 17:10:56.32764009 +0000 UTC m=+1819.473697630" observedRunningTime="2026-03-19 17:10:56.937144582 +0000 UTC m=+1820.083202112" watchObservedRunningTime="2026-03-19 17:10:56.952184375 +0000 UTC m=+1820.098241925" Mar 19 17:10:57 crc kubenswrapper[4792]: I0319 17:10:57.135766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 17:10:57 crc kubenswrapper[4792]: I0319 17:10:57.135815 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.085113 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.087530 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.092150 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 17:10:58 crc kubenswrapper[4792]: E0319 17:10:58.125333 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda44e6522_9196_42bd_9162_88b9f03a0b21.slice/crio-7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86.scope\": RecentStats: unable to 
find data in memory cache]" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.555206 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625324 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625401 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px6br\" (UniqueName: \"kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625468 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc 
kubenswrapper[4792]: I0319 17:10:58.625492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.625517 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts\") pod \"a44e6522-9196-42bd-9162-88b9f03a0b21\" (UID: \"a44e6522-9196-42bd-9162-88b9f03a0b21\") " Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.631330 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.632619 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br" (OuterVolumeSpecName: "kube-api-access-px6br") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "kube-api-access-px6br". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.633065 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.655981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts" (OuterVolumeSpecName: "scripts") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.684535 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.727474 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728220 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728242 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a44e6522-9196-42bd-9162-88b9f03a0b21-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728253 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px6br\" (UniqueName: \"kubernetes.io/projected/a44e6522-9196-42bd-9162-88b9f03a0b21-kube-api-access-px6br\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728263 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728273 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.728282 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.776015 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data" (OuterVolumeSpecName: "config-data") pod "a44e6522-9196-42bd-9162-88b9f03a0b21" (UID: "a44e6522-9196-42bd-9162-88b9f03a0b21"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.831052 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44e6522-9196-42bd-9162-88b9f03a0b21-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.962801 4792 generic.go:334] "Generic (PLEG): container finished" podID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerID="7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86" exitCode=0 Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.962891 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.962947 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerDied","Data":"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86"} Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.962997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a44e6522-9196-42bd-9162-88b9f03a0b21","Type":"ContainerDied","Data":"1cc60d6cd6fb5f83b7bace3c3757a8f832fbd6c7add87b98aa2e870c24753e15"} Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.963018 4792 scope.go:117] "RemoveContainer" containerID="2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b" Mar 19 17:10:58 crc kubenswrapper[4792]: I0319 17:10:58.969206 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.024169 4792 scope.go:117] "RemoveContainer" containerID="2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.035789 4792 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.052013 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.057149 4792 scope.go:117] "RemoveContainer" containerID="7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.075788 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.076315 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-notification-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076328 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-notification-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.076369 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-central-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076376 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-central-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.076387 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="proxy-httpd" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076402 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="proxy-httpd" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.076421 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="sg-core" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 
17:10:59.076426 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="sg-core" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076638 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="sg-core" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076665 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-notification-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076679 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="proxy-httpd" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.076694 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" containerName="ceilometer-central-agent" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.078770 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.087500 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.087820 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.087959 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.092095 4792 scope.go:117] "RemoveContainer" containerID="2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.092106 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.118051 4792 scope.go:117] "RemoveContainer" containerID="2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.119090 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b\": container with ID starting with 2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b not found: ID does not exist" containerID="2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.119209 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b"} err="failed to get container status \"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b\": rpc error: code = NotFound desc = could not find container \"2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b\": 
container with ID starting with 2620d307c09a81879a2b15d5c9f698d375137a2133976a0a80f9e814037bf28b not found: ID does not exist" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.119300 4792 scope.go:117] "RemoveContainer" containerID="2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.120123 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9\": container with ID starting with 2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9 not found: ID does not exist" containerID="2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.120157 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9"} err="failed to get container status \"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9\": rpc error: code = NotFound desc = could not find container \"2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9\": container with ID starting with 2c354d6f28a07a2124645649539a290c9f8e3aed0bf7144585b83aeaeed869e9 not found: ID does not exist" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.120181 4792 scope.go:117] "RemoveContainer" containerID="7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.121510 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86\": container with ID starting with 7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86 not found: ID does not exist" 
containerID="7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.121554 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86"} err="failed to get container status \"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86\": rpc error: code = NotFound desc = could not find container \"7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86\": container with ID starting with 7657e17a2d9559f692d553587257310dc73944f1d52146d14fb02a0479f74f86 not found: ID does not exist" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.121582 4792 scope.go:117] "RemoveContainer" containerID="2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce" Mar 19 17:10:59 crc kubenswrapper[4792]: E0319 17:10:59.121817 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce\": container with ID starting with 2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce not found: ID does not exist" containerID="2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.121832 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce"} err="failed to get container status \"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce\": rpc error: code = NotFound desc = could not find container \"2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce\": container with ID starting with 2c77db0f49a43451549e61b72e96bd5c391b2983a3d301f33a5b65d37dd949ce not found: ID does not exist" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139387 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139447 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139470 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bz4\" (UniqueName: \"kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139645 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139695 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.139731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.148957 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.152021 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.157925 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241229 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241298 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241421 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bz4\" (UniqueName: \"kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.241528 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: 
I0319 17:10:59.241576 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.242865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.243087 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.249417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.249499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.249545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.249766 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.250048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.257456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bz4\" (UniqueName: \"kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4\") pod \"ceilometer-0\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.407025 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.763561 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44e6522-9196-42bd-9162-88b9f03a0b21" path="/var/lib/kubelet/pods/a44e6522-9196-42bd-9162-88b9f03a0b21/volumes" Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.880911 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:10:59 crc kubenswrapper[4792]: W0319 17:10:59.891457 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27eb4ece_84fb_4cda_8ca4_23b21ae220c7.slice/crio-d9d63cccc82099fa1e7a93d74dbfa4d57a66582b8ccd147a2dc055499007a44c WatchSource:0}: Error finding container d9d63cccc82099fa1e7a93d74dbfa4d57a66582b8ccd147a2dc055499007a44c: Status 404 returned error can't find the container with id d9d63cccc82099fa1e7a93d74dbfa4d57a66582b8ccd147a2dc055499007a44c Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.983144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerStarted","Data":"d9d63cccc82099fa1e7a93d74dbfa4d57a66582b8ccd147a2dc055499007a44c"} Mar 19 17:10:59 crc kubenswrapper[4792]: I0319 17:10:59.995718 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 17:11:01 crc kubenswrapper[4792]: I0319 17:11:01.006226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerStarted","Data":"a70e34a64e77c26bc10906668c851fa897f06ca4e914e40fcb48e26741e1e867"} Mar 19 17:11:02 crc kubenswrapper[4792]: I0319 17:11:02.021825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerStarted","Data":"c08640fc26c6f95d0e0d5f61c4175f8fcae81f12cd237078ea274801399936f3"} Mar 19 17:11:03 crc kubenswrapper[4792]: I0319 17:11:03.099080 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerStarted","Data":"bcbfbd66732619cbee26416c7486360c863bdb1111fa2f4b6f1e8c8f0d443725"} Mar 19 17:11:05 crc kubenswrapper[4792]: I0319 17:11:05.122375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerStarted","Data":"dd71861af46a632e6bd23750343b93da937b2263465c7c3dafc2bdc3db6b3c25"} Mar 19 17:11:05 crc kubenswrapper[4792]: I0319 17:11:05.122967 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:11:05 crc kubenswrapper[4792]: I0319 17:11:05.157469 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.804893899 podStartE2EDuration="6.157446333s" podCreationTimestamp="2026-03-19 17:10:59 +0000 UTC" firstStartedPulling="2026-03-19 17:10:59.894085715 +0000 UTC m=+1823.040143255" lastFinishedPulling="2026-03-19 17:11:04.246638149 +0000 UTC m=+1827.392695689" observedRunningTime="2026-03-19 17:11:05.142192774 +0000 UTC m=+1828.288250314" watchObservedRunningTime="2026-03-19 17:11:05.157446333 +0000 UTC m=+1828.303503873" Mar 19 17:11:05 crc kubenswrapper[4792]: I0319 17:11:05.328564 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 19 17:11:07 crc kubenswrapper[4792]: I0319 17:11:07.755298 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:11:07 crc kubenswrapper[4792]: E0319 17:11:07.756185 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:11:22 crc kubenswrapper[4792]: I0319 17:11:22.739705 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:11:22 crc kubenswrapper[4792]: E0319 17:11:22.740575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:11:29 crc kubenswrapper[4792]: I0319 17:11:29.426639 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 19 17:11:34 crc kubenswrapper[4792]: I0319 17:11:34.740877 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:11:34 crc kubenswrapper[4792]: E0319 17:11:34.742199 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:11:40 crc kubenswrapper[4792]: I0319 17:11:40.763713 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-sync-r6f9z"] Mar 19 17:11:40 crc kubenswrapper[4792]: I0319 17:11:40.774783 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-r6f9z"] Mar 19 17:11:40 crc kubenswrapper[4792]: I0319 17:11:40.864144 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6dwwv"] Mar 19 17:11:40 crc kubenswrapper[4792]: I0319 17:11:40.865737 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:40 crc kubenswrapper[4792]: I0319 17:11:40.888287 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6dwwv"] Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.035475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.035751 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbdz\" (UniqueName: \"kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.035904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.138234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8qbdz\" (UniqueName: \"kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.138370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.138442 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.148083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.150503 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data\") pod \"heat-db-sync-6dwwv\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.158418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbdz\" (UniqueName: \"kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz\") pod \"heat-db-sync-6dwwv\" (UID: 
\"a5ee639b-34bf-4824-902d-e38af5ad4527\") " pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.214000 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6dwwv" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.761835 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaaa799-71ff-429b-86fe-bbe4e903984f" path="/var/lib/kubelet/pods/cdaaa799-71ff-429b-86fe-bbe4e903984f/volumes" Mar 19 17:11:41 crc kubenswrapper[4792]: I0319 17:11:41.764531 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6dwwv"] Mar 19 17:11:42 crc kubenswrapper[4792]: I0319 17:11:42.538880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6dwwv" event={"ID":"a5ee639b-34bf-4824-902d-e38af5ad4527","Type":"ContainerStarted","Data":"25081d3fdae32978d62c029c1fcd73959e1f37851d5e2d5d2472bb4e7ea4501d"} Mar 19 17:11:42 crc kubenswrapper[4792]: I0319 17:11:42.978347 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 19 17:11:43 crc kubenswrapper[4792]: I0319 17:11:43.468412 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:11:43 crc kubenswrapper[4792]: I0319 17:11:43.469439 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="sg-core" containerID="cri-o://bcbfbd66732619cbee26416c7486360c863bdb1111fa2f4b6f1e8c8f0d443725" gracePeriod=30 Mar 19 17:11:43 crc kubenswrapper[4792]: I0319 17:11:43.469660 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-notification-agent" containerID="cri-o://c08640fc26c6f95d0e0d5f61c4175f8fcae81f12cd237078ea274801399936f3" gracePeriod=30 Mar 19 17:11:43 crc 
kubenswrapper[4792]: I0319 17:11:43.469689 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="proxy-httpd" containerID="cri-o://dd71861af46a632e6bd23750343b93da937b2263465c7c3dafc2bdc3db6b3c25" gracePeriod=30 Mar 19 17:11:43 crc kubenswrapper[4792]: I0319 17:11:43.469327 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-central-agent" containerID="cri-o://a70e34a64e77c26bc10906668c851fa897f06ca4e914e40fcb48e26741e1e867" gracePeriod=30 Mar 19 17:11:43 crc kubenswrapper[4792]: I0319 17:11:43.921926 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564026 4792 generic.go:334] "Generic (PLEG): container finished" podID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerID="dd71861af46a632e6bd23750343b93da937b2263465c7c3dafc2bdc3db6b3c25" exitCode=0 Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564074 4792 generic.go:334] "Generic (PLEG): container finished" podID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerID="bcbfbd66732619cbee26416c7486360c863bdb1111fa2f4b6f1e8c8f0d443725" exitCode=2 Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564085 4792 generic.go:334] "Generic (PLEG): container finished" podID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerID="a70e34a64e77c26bc10906668c851fa897f06ca4e914e40fcb48e26741e1e867" exitCode=0 Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerDied","Data":"dd71861af46a632e6bd23750343b93da937b2263465c7c3dafc2bdc3db6b3c25"} Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564148 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerDied","Data":"bcbfbd66732619cbee26416c7486360c863bdb1111fa2f4b6f1e8c8f0d443725"} Mar 19 17:11:44 crc kubenswrapper[4792]: I0319 17:11:44.564160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerDied","Data":"a70e34a64e77c26bc10906668c851fa897f06ca4e914e40fcb48e26741e1e867"} Mar 19 17:11:47 crc kubenswrapper[4792]: I0319 17:11:47.753507 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:11:47 crc kubenswrapper[4792]: E0319 17:11:47.754531 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:11:47 crc kubenswrapper[4792]: I0319 17:11:47.975470 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq" containerID="cri-o://e8db222d890264fc685f1a1f921fae4f10cc91b5bca90eb6aab73ed3f1e1b91c" gracePeriod=604796 Mar 19 17:11:49 crc kubenswrapper[4792]: I0319 17:11:49.648101 4792 generic.go:334] "Generic (PLEG): container finished" podID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerID="c08640fc26c6f95d0e0d5f61c4175f8fcae81f12cd237078ea274801399936f3" exitCode=0 Mar 19 17:11:49 crc kubenswrapper[4792]: I0319 17:11:49.648183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerDied","Data":"c08640fc26c6f95d0e0d5f61c4175f8fcae81f12cd237078ea274801399936f3"} Mar 19 17:11:49 crc kubenswrapper[4792]: I0319 17:11:49.759190 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq" containerID="cri-o://6e6db9b8741f1d33191e512ae255863a39fcf3e2a5412c3bddb2247e63fca59a" gracePeriod=604795 Mar 19 17:11:50 crc kubenswrapper[4792]: I0319 17:11:50.938439 4792 scope.go:117] "RemoveContainer" containerID="ada9040d47fab10e3e29ed4fa5620eabc6bd3429768e1cff41bfe3f6feb55372" Mar 19 17:11:51 crc kubenswrapper[4792]: I0319 17:11:51.766561 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 19 17:11:52 crc kubenswrapper[4792]: I0319 17:11:52.251618 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Mar 19 17:11:54 crc kubenswrapper[4792]: I0319 17:11:54.775642 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerID="e8db222d890264fc685f1a1f921fae4f10cc91b5bca90eb6aab73ed3f1e1b91c" exitCode=0 Mar 19 17:11:54 crc kubenswrapper[4792]: I0319 17:11:54.775721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerDied","Data":"e8db222d890264fc685f1a1f921fae4f10cc91b5bca90eb6aab73ed3f1e1b91c"} Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.321163 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.400886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401036 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bz4\" (UniqueName: \"kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401130 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401154 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401230 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401380 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.401484 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data\") pod \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\" (UID: \"27eb4ece-84fb-4cda-8ca4-23b21ae220c7\") " Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.403512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.408641 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts" (OuterVolumeSpecName: "scripts") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.411628 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.413360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4" (OuterVolumeSpecName: "kube-api-access-79bz4") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "kube-api-access-79bz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.458470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.505428 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.505467 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bz4\" (UniqueName: \"kubernetes.io/projected/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-kube-api-access-79bz4\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.505482 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.505492 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.505503 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.514385 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.608674 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.612015 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.628996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data" (OuterVolumeSpecName: "config-data") pod "27eb4ece-84fb-4cda-8ca4-23b21ae220c7" (UID: "27eb4ece-84fb-4cda-8ca4-23b21ae220c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.710541 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.710569 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27eb4ece-84fb-4cda-8ca4-23b21ae220c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.791184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27eb4ece-84fb-4cda-8ca4-23b21ae220c7","Type":"ContainerDied","Data":"d9d63cccc82099fa1e7a93d74dbfa4d57a66582b8ccd147a2dc055499007a44c"} Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.791242 4792 scope.go:117] "RemoveContainer" containerID="dd71861af46a632e6bd23750343b93da937b2263465c7c3dafc2bdc3db6b3c25" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.792252 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.825606 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.845211 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.856757 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:11:55 crc kubenswrapper[4792]: E0319 17:11:55.857293 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-central-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857314 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-central-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: E0319 17:11:55.857325 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="sg-core" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857334 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="sg-core" Mar 19 17:11:55 crc kubenswrapper[4792]: E0319 17:11:55.857345 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-notification-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857353 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-notification-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: E0319 17:11:55.857365 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="proxy-httpd" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857370 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="proxy-httpd" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857633 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="sg-core" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857654 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-central-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857669 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="proxy-httpd" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.857686 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" containerName="ceilometer-notification-agent" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.863969 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.867068 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.868978 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.872132 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.872567 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915317 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4x44k\" (UniqueName: \"kubernetes.io/projected/05b1938b-461b-46fe-9fb9-28e17c7591bc-kube-api-access-4x44k\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-scripts\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:55 crc kubenswrapper[4792]: I0319 17:11:55.915584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-config-data\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-config-data\") pod \"ceilometer-0\" (UID: 
\"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017194 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017280 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x44k\" (UniqueName: \"kubernetes.io/projected/05b1938b-461b-46fe-9fb9-28e17c7591bc-kube-api-access-4x44k\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-scripts\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.017753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b1938b-461b-46fe-9fb9-28e17c7591bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.021097 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-scripts\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.021161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.021180 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.021761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.022700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b1938b-461b-46fe-9fb9-28e17c7591bc-config-data\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.039042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x44k\" (UniqueName: \"kubernetes.io/projected/05b1938b-461b-46fe-9fb9-28e17c7591bc-kube-api-access-4x44k\") pod \"ceilometer-0\" (UID: \"05b1938b-461b-46fe-9fb9-28e17c7591bc\") " pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.192695 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.846477 4792 generic.go:334] "Generic (PLEG): container finished" podID="886bf823-6964-4a71-807d-2b448201fc5e" containerID="6e6db9b8741f1d33191e512ae255863a39fcf3e2a5412c3bddb2247e63fca59a" exitCode=0 Mar 19 17:11:56 crc kubenswrapper[4792]: I0319 17:11:56.846862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerDied","Data":"6e6db9b8741f1d33191e512ae255863a39fcf3e2a5412c3bddb2247e63fca59a"} Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.254546 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"] Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.260145 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.262916 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.267907 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"] Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355084 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355171 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7rv\" (UniqueName: \"kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv\") pod 
\"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.355511 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.458751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.459208 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.459278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.459480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.460106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.460335 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.460374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.460810 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.461099 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.461204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7rv\" (UniqueName: 
\"kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.461308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.464693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.467307 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.479699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7rv\" (UniqueName: \"kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv\") pod \"dnsmasq-dns-594cb89c79-phbpx\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.632648 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:11:57 crc kubenswrapper[4792]: I0319 17:11:57.757613 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27eb4ece-84fb-4cda-8ca4-23b21ae220c7" path="/var/lib/kubelet/pods/27eb4ece-84fb-4cda-8ca4-23b21ae220c7/volumes" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.164677 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565672-7gqn5"] Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.166528 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.172488 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.172682 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.172782 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.186095 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwjh\" (UniqueName: \"kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh\") pod \"auto-csr-approver-29565672-7gqn5\" (UID: \"4d54b5d3-d6b5-428c-9e78-ab45a7af529b\") " pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.202884 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565672-7gqn5"] Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.288452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwjh\" 
(UniqueName: \"kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh\") pod \"auto-csr-approver-29565672-7gqn5\" (UID: \"4d54b5d3-d6b5-428c-9e78-ab45a7af529b\") " pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.308854 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwjh\" (UniqueName: \"kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh\") pod \"auto-csr-approver-29565672-7gqn5\" (UID: \"4d54b5d3-d6b5-428c-9e78-ab45a7af529b\") " pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:00 crc kubenswrapper[4792]: I0319 17:12:00.498629 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:01 crc kubenswrapper[4792]: I0319 17:12:01.747219 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:12:01 crc kubenswrapper[4792]: E0319 17:12:01.750489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.266170 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.266241 4792 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.266358 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qbdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6dwwv_openstack(a5ee639b-34bf-4824-902d-e38af5ad4527): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.269986 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6dwwv" podUID="a5ee639b-34bf-4824-902d-e38af5ad4527" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.279724 4792 scope.go:117] "RemoveContainer" containerID="bcbfbd66732619cbee26416c7486360c863bdb1111fa2f4b6f1e8c8f0d443725" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.564473 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.591787 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.619173 4792 scope.go:117] "RemoveContainer" containerID="c08640fc26c6f95d0e0d5f61c4175f8fcae81f12cd237078ea274801399936f3" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.667497 4792 scope.go:117] "RemoveContainer" containerID="a70e34a64e77c26bc10906668c851fa897f06ca4e914e40fcb48e26741e1e867" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.690272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.690330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxvwg\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.690417 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.690444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.691444 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692038 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692130 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692780 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692832 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.692961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693007 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693206 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693246 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693286 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693315 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693340 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693374 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf\") pod 
\"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693392 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693426 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dst7m\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m\") pod \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\" (UID: \"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2\") " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.693588 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.694122 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.694625 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.703470 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.699628 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg" (OuterVolumeSpecName: "kube-api-access-xxvwg") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "kube-api-access-xxvwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.703660 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.700133 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.700130 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.700330 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.710670 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.711951 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m" (OuterVolumeSpecName: "kube-api-access-dst7m") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "kube-api-access-dst7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.711945 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.712523 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.723363 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.723717 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.757341 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info" (OuterVolumeSpecName: "pod-info") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.788885 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data" (OuterVolumeSpecName: "config-data") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806020 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886bf823-6964-4a71-807d-2b448201fc5e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806069 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxvwg\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-kube-api-access-xxvwg\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806082 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806091 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-plugins\") on node \"crc\" 
DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806099 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806109 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-pod-info\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806219 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806230 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806238 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806246 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806254 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886bf823-6964-4a71-807d-2b448201fc5e-pod-info\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.806262 4792 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-dst7m\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-kube-api-access-dst7m\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.857518 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data" (OuterVolumeSpecName: "config-data") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.867497 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61" (OuterVolumeSpecName: "persistence") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.892058 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.896589 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09 podName:886bf823-6964-4a71-807d-2b448201fc5e nodeName:}" failed. No retries permitted until 2026-03-19 17:12:03.396563915 +0000 UTC m=+1886.542621455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.904798 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf" (OuterVolumeSpecName: "server-conf") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.909156 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-server-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.909721 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") on node \"crc\" " Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.909816 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.909913 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886bf823-6964-4a71-807d-2b448201fc5e-server-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:02 crc 
kubenswrapper[4792]: I0319 17:12:02.958625 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.958925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"886bf823-6964-4a71-807d-2b448201fc5e","Type":"ContainerDied","Data":"729951d1a403c1a25657d9fc18344d0287e0f41ca4e13ee6d216c099ffc93f66"} Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.959003 4792 scope.go:117] "RemoveContainer" containerID="6e6db9b8741f1d33191e512ae255863a39fcf3e2a5412c3bddb2247e63fca59a" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.963055 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.963073 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.963541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8d58d025-e325-4ac1-8bf8-b251ea8ed3f2","Type":"ContainerDied","Data":"999c30930a1dffe285667e1dc1777106cf7ce3530556817832f3ec9970b657d6"} Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.964066 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61") on node "crc" Mar 19 17:12:02 crc kubenswrapper[4792]: I0319 17:12:02.970075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:12:02 crc kubenswrapper[4792]: E0319 17:12:02.971452 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-6dwwv" podUID="a5ee639b-34bf-4824-902d-e38af5ad4527"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.014063 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886bf823-6964-4a71-807d-2b448201fc5e-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.014097 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") on node \"crc\" DevicePath \"\""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.022338 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" (UID: "8d58d025-e325-4ac1-8bf8-b251ea8ed3f2"). InnerVolumeSpecName "rabbitmq-confd".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.119335 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.135695 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.153204 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.230435 4792 scope.go:117] "RemoveContainer" containerID="bc1b64f0e6128b699c99dc8dcb63e408b32a3cd1bb88f233cb5b2f619cde4569"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.312487 4792 scope.go:117] "RemoveContainer" containerID="e8db222d890264fc685f1a1f921fae4f10cc91b5bca90eb6aab73ed3f1e1b91c"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.430259 4792 scope.go:117] "RemoveContainer" containerID="eb4a7be4f50be7354e01d506a44dffdf85cda1ccc2a413dfca36b1e0196c8715"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.436917 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"886bf823-6964-4a71-807d-2b448201fc5e\" (UID: \"886bf823-6964-4a71-807d-2b448201fc5e\") "
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.500898 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.564199 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.587925 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Mar
19 17:12:03 crc kubenswrapper[4792]: E0319 17:12:03.588572 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.588595 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: E0319 17:12:03.588616 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="setup-container"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.588623 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="setup-container"
Mar 19 17:12:03 crc kubenswrapper[4792]: E0319 17:12:03.588663 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="setup-container"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.588672 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="setup-container"
Mar 19 17:12:03 crc kubenswrapper[4792]: E0319 17:12:03.588691 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.588698 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.588999 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.589034 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319
17:12:03.590643 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.602008 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565672-7gqn5"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.618725 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.621763 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09" (OuterVolumeSpecName: "persistence") pod "886bf823-6964-4a71-807d-2b448201fc5e" (UID: "886bf823-6964-4a71-807d-2b448201fc5e"). InnerVolumeSpecName "pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.648893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.648955 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae048e02-6ff7-4fa8-81c0-57ab3c051662-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.648986 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName:
\"kubernetes.io/downward-api/ae048e02-6ff7-4fa8-81c0-57ab3c051662-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653430 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6bcf\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-kube-api-access-n6bcf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653596 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-config-data\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.653930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.654085 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") on node \"crc\" "
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.710831 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.710997 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09") on node "crc"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.776747 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae048e02-6ff7-4fa8-81c0-57ab3c051662-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777196 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae048e02-6ff7-4fa8-81c0-57ab3c051662-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6bcf\" (UniqueName:
\"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-kube-api-access-n6bcf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-config-data\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777657 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") "
pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.777713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.778046 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") on node \"crc\" DevicePath \"\""
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.782355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.783622 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.784584 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.784923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName:
\"kubernetes.io/empty-dir/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.788792 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.788854 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/63576ad5fa42431418a875a556f725540f55ae4f6468824ed68600c688720c80/globalmount\"" pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.789212 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.792721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.798542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae048e02-6ff7-4fa8-81c0-57ab3c051662-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID:
\"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.799191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae048e02-6ff7-4fa8-81c0-57ab3c051662-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.810254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6bcf\" (UniqueName: \"kubernetes.io/projected/ae048e02-6ff7-4fa8-81c0-57ab3c051662-kube-api-access-n6bcf\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.816552 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae048e02-6ff7-4fa8-81c0-57ab3c051662-config-data\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.835856 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" path="/var/lib/kubelet/pods/8d58d025-e325-4ac1-8bf8-b251ea8ed3f2/volumes"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.836823 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.836871 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.836891 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.857176 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/rabbitmq-cell1-server-0"]
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.857470 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.862604 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.862786 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.862927 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.862960 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.863963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gqblj"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.864365 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.864629 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.993549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd0da7d5-189f-4239-9015-6d161a4e7d61\") pod \"rabbitmq-server-2\" (UID: \"ae048e02-6ff7-4fa8-81c0-57ab3c051662\") " pod="openstack/rabbitmq-server-2"
Mar 19 17:12:03 crc kubenswrapper[4792]: I0319 17:12:03.998096 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"f89f46af9d2e9e0fcc3ece47ff3f2454613a8d5fe987f865169db63574b02bb5"}
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.002193 4792 generic.go:334] "Generic (PLEG): container finished" podID="ee0d117f-a568-4077-a594-bcba45f1188c" containerID="ec875f6218ae8647de83c04f5c266598b2ac2431642b24390150f7253e2cc997" exitCode=0
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.002271 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" event={"ID":"ee0d117f-a568-4077-a594-bcba45f1188c","Type":"ContainerDied","Data":"ec875f6218ae8647de83c04f5c266598b2ac2431642b24390150f7253e2cc997"}
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.002304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" event={"ID":"ee0d117f-a568-4077-a594-bcba45f1188c","Type":"ContainerStarted","Data":"c395318a98aa0ce1e1f43d8304799a588011f9ddda1aecc470506d450bdd318c"}
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.010141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" event={"ID":"4d54b5d3-d6b5-428c-9e78-ab45a7af529b","Type":"ContainerStarted","Data":"d518723cb795c6f8d2b067bf9cc550fbf34540eea877bbeaa9786c8d8653e46f"}
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.051168 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.051462 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.051789 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.051865 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c585782f-9e4f-4495-9e68-a10aa5fc90b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2w9\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-kube-api-access-xt2w9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052326 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c585782f-9e4f-4495-9e68-a10aa5fc90b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052686 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.052736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.155682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156241 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c585782f-9e4f-4495-9e68-a10aa5fc90b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156306 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2w9\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-kube-api-access-xt2w9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156420 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName:
\"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c585782f-9e4f-4495-9e68-a10aa5fc90b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156629 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156696 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156749 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName:
\"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.156939 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.157216 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.157890 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.159247 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.161318 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c585782f-9e4f-4495-9e68-a10aa5fc90b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.163534 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.163643 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76311b204d5b95e55d82801e47aa6cbf79945f7bf65419c3aa5650d333431014/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.165051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c585782f-9e4f-4495-9e68-a10aa5fc90b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.167078 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.170140 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc 
kubenswrapper[4792]: I0319 17:12:04.170800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c585782f-9e4f-4495-9e68-a10aa5fc90b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.181543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2w9\" (UniqueName: \"kubernetes.io/projected/c585782f-9e4f-4495-9e68-a10aa5fc90b0-kube-api-access-xt2w9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.235934 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eeaa2e15-c6e3-4d0e-9008-7c4faf48bc09\") pod \"rabbitmq-cell1-server-0\" (UID: \"c585782f-9e4f-4495-9e68-a10aa5fc90b0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.254961 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.511549 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:04 crc kubenswrapper[4792]: I0319 17:12:04.833105 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 19 17:12:04 crc kubenswrapper[4792]: W0319 17:12:04.868086 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae048e02_6ff7_4fa8_81c0_57ab3c051662.slice/crio-7eeca55c42d9974d3ca9ddad3587ad5643527627b66d776497a67e225084d052 WatchSource:0}: Error finding container 7eeca55c42d9974d3ca9ddad3587ad5643527627b66d776497a67e225084d052: Status 404 returned error can't find the container with id 7eeca55c42d9974d3ca9ddad3587ad5643527627b66d776497a67e225084d052 Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.061102 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" event={"ID":"ee0d117f-a568-4077-a594-bcba45f1188c","Type":"ContainerStarted","Data":"9fd6d7b69a40542fd9645fa8183e37c233b95f46a7d600f117e2e0f56f21b4a0"} Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.061412 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.069966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ae048e02-6ff7-4fa8-81c0-57ab3c051662","Type":"ContainerStarted","Data":"7eeca55c42d9974d3ca9ddad3587ad5643527627b66d776497a67e225084d052"} Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.083949 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" podStartSLOduration=8.083932191 podStartE2EDuration="8.083932191s" podCreationTimestamp="2026-03-19 17:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
17:12:05.078893983 +0000 UTC m=+1888.224951523" watchObservedRunningTime="2026-03-19 17:12:05.083932191 +0000 UTC m=+1888.229989731" Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.172653 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 17:12:05 crc kubenswrapper[4792]: W0319 17:12:05.173226 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc585782f_9e4f_4495_9e68_a10aa5fc90b0.slice/crio-612c976963b858dbe40763c34986b306b3dd4b81333b3d8f50987b81c2b3aeba WatchSource:0}: Error finding container 612c976963b858dbe40763c34986b306b3dd4b81333b3d8f50987b81c2b3aeba: Status 404 returned error can't find the container with id 612c976963b858dbe40763c34986b306b3dd4b81333b3d8f50987b81c2b3aeba Mar 19 17:12:05 crc kubenswrapper[4792]: I0319 17:12:05.751495 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886bf823-6964-4a71-807d-2b448201fc5e" path="/var/lib/kubelet/pods/886bf823-6964-4a71-807d-2b448201fc5e/volumes" Mar 19 17:12:06 crc kubenswrapper[4792]: I0319 17:12:06.082722 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" event={"ID":"4d54b5d3-d6b5-428c-9e78-ab45a7af529b","Type":"ContainerStarted","Data":"a5c3fd2de6271e229fb4333296496d4bbac39b478c7e912bfe55fa9efb6e671b"} Mar 19 17:12:06 crc kubenswrapper[4792]: I0319 17:12:06.084297 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c585782f-9e4f-4495-9e68-a10aa5fc90b0","Type":"ContainerStarted","Data":"612c976963b858dbe40763c34986b306b3dd4b81333b3d8f50987b81c2b3aeba"} Mar 19 17:12:06 crc kubenswrapper[4792]: I0319 17:12:06.095251 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" podStartSLOduration=5.086590835 podStartE2EDuration="6.095235283s" 
podCreationTimestamp="2026-03-19 17:12:00 +0000 UTC" firstStartedPulling="2026-03-19 17:12:03.607561423 +0000 UTC m=+1886.753618963" lastFinishedPulling="2026-03-19 17:12:04.616205871 +0000 UTC m=+1887.762263411" observedRunningTime="2026-03-19 17:12:06.093908646 +0000 UTC m=+1889.239966206" watchObservedRunningTime="2026-03-19 17:12:06.095235283 +0000 UTC m=+1889.241292823" Mar 19 17:12:06 crc kubenswrapper[4792]: I0319 17:12:06.749532 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="8d58d025-e325-4ac1-8bf8-b251ea8ed3f2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: i/o timeout" Mar 19 17:12:07 crc kubenswrapper[4792]: I0319 17:12:07.099699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ae048e02-6ff7-4fa8-81c0-57ab3c051662","Type":"ContainerStarted","Data":"63030955758dc1f3cb51d9b910d45220399a5f7af31d43ca347fe1d1d75efc9f"} Mar 19 17:12:07 crc kubenswrapper[4792]: I0319 17:12:07.105895 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d54b5d3-d6b5-428c-9e78-ab45a7af529b" containerID="a5c3fd2de6271e229fb4333296496d4bbac39b478c7e912bfe55fa9efb6e671b" exitCode=0 Mar 19 17:12:07 crc kubenswrapper[4792]: I0319 17:12:07.106005 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" event={"ID":"4d54b5d3-d6b5-428c-9e78-ab45a7af529b","Type":"ContainerDied","Data":"a5c3fd2de6271e229fb4333296496d4bbac39b478c7e912bfe55fa9efb6e671b"} Mar 19 17:12:07 crc kubenswrapper[4792]: I0319 17:12:07.111594 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c585782f-9e4f-4495-9e68-a10aa5fc90b0","Type":"ContainerStarted","Data":"6b73273f4d2c9a41c584882cbaad172fbcc5cafae869c19e808e9a8cbb1660cd"} Mar 19 17:12:07 crc kubenswrapper[4792]: I0319 17:12:07.251539 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-cell1-server-0" podUID="886bf823-6964-4a71-807d-2b448201fc5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: i/o timeout" Mar 19 17:12:08 crc kubenswrapper[4792]: I0319 17:12:08.525667 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:08 crc kubenswrapper[4792]: I0319 17:12:08.598007 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwjh\" (UniqueName: \"kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh\") pod \"4d54b5d3-d6b5-428c-9e78-ab45a7af529b\" (UID: \"4d54b5d3-d6b5-428c-9e78-ab45a7af529b\") " Mar 19 17:12:08 crc kubenswrapper[4792]: I0319 17:12:08.604872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh" (OuterVolumeSpecName: "kube-api-access-zkwjh") pod "4d54b5d3-d6b5-428c-9e78-ab45a7af529b" (UID: "4d54b5d3-d6b5-428c-9e78-ab45a7af529b"). InnerVolumeSpecName "kube-api-access-zkwjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:08 crc kubenswrapper[4792]: I0319 17:12:08.700782 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwjh\" (UniqueName: \"kubernetes.io/projected/4d54b5d3-d6b5-428c-9e78-ab45a7af529b-kube-api-access-zkwjh\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.154530 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" event={"ID":"4d54b5d3-d6b5-428c-9e78-ab45a7af529b","Type":"ContainerDied","Data":"d518723cb795c6f8d2b067bf9cc550fbf34540eea877bbeaa9786c8d8653e46f"} Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.154944 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d518723cb795c6f8d2b067bf9cc550fbf34540eea877bbeaa9786c8d8653e46f" Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.154763 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565672-7gqn5" Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.171945 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565666-mmx49"] Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.184155 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565666-mmx49"] Mar 19 17:12:09 crc kubenswrapper[4792]: I0319 17:12:09.752300 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175b0c5f-9753-4154-9086-e39e498077e5" path="/var/lib/kubelet/pods/175b0c5f-9753-4154-9086-e39e498077e5/volumes" Mar 19 17:12:10 crc kubenswrapper[4792]: I0319 17:12:10.170123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"8e2452edc531da532a5c9b503cb1c22ca7c5c58a579928984df857e115e9697e"} Mar 19 17:12:10 crc 
kubenswrapper[4792]: I0319 17:12:10.170165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"5af0d393bca190b78fa50c881d4fbcfbcad66edf3876c95ba3eafa8a09d61bc3"} Mar 19 17:12:11 crc kubenswrapper[4792]: I0319 17:12:11.187161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"1d2dba0c5ef402f1acac62bb5182f75ce59a1abbf2cb3effabcbe360a90148ca"} Mar 19 17:12:12 crc kubenswrapper[4792]: I0319 17:12:12.636776 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:12:12 crc kubenswrapper[4792]: I0319 17:12:12.767460 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:12:12 crc kubenswrapper[4792]: I0319 17:12:12.767994 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="dnsmasq-dns" containerID="cri-o://d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e" gracePeriod=10 Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.112440 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-99s9r"] Mar 19 17:12:13 crc kubenswrapper[4792]: E0319 17:12:13.113006 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d54b5d3-d6b5-428c-9e78-ab45a7af529b" containerName="oc" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.113018 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d54b5d3-d6b5-428c-9e78-ab45a7af529b" containerName="oc" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.113267 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d54b5d3-d6b5-428c-9e78-ab45a7af529b" 
containerName="oc" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.114594 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.140453 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-99s9r"] Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.224858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgd9\" (UniqueName: \"kubernetes.io/projected/e705cf76-1371-4677-80a0-8582f8695a29-kube-api-access-pzgd9\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.224910 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.224947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-config\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.225033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 
crc kubenswrapper[4792]: I0319 17:12:13.225058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.225170 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.225269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.232096 4792 generic.go:334] "Generic (PLEG): container finished" podID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerID="d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e" exitCode=0 Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.232133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" event={"ID":"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c","Type":"ContainerDied","Data":"d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e"} Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.330880 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.332402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.332671 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgd9\" (UniqueName: \"kubernetes.io/projected/e705cf76-1371-4677-80a0-8582f8695a29-kube-api-access-pzgd9\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.332718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.332854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-config\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.333107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.333176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.333315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.334264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.334802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-config\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.335531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-dns-svc\") pod 
\"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.335866 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.336410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e705cf76-1371-4677-80a0-8582f8695a29-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: E0319 17:12:13.370603 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0ddd9c_4760_4f3e_bb07_ac8aedbb145c.slice/crio-conmon-d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e.scope\": RecentStats: unable to find data in memory cache]" Mar 19 17:12:13 crc kubenswrapper[4792]: E0319 17:12:13.372203 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0ddd9c_4760_4f3e_bb07_ac8aedbb145c.slice/crio-conmon-d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e.scope\": RecentStats: unable to find data in memory cache]" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.375121 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgd9\" (UniqueName: 
\"kubernetes.io/projected/e705cf76-1371-4677-80a0-8582f8695a29-kube-api-access-pzgd9\") pod \"dnsmasq-dns-5596c69fcc-99s9r\" (UID: \"e705cf76-1371-4677-80a0-8582f8695a29\") " pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:13 crc kubenswrapper[4792]: I0319 17:12:13.687999 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.247492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"f1d8c3469cb7ede797e8d762b78b7e7bb90e9fd944401cb8759bea6144dbea3b"} Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.247715 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.284378 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.794323381 podStartE2EDuration="19.284359217s" podCreationTimestamp="2026-03-19 17:11:55 +0000 UTC" firstStartedPulling="2026-03-19 17:12:03.153671853 +0000 UTC m=+1886.299729393" lastFinishedPulling="2026-03-19 17:12:12.643707689 +0000 UTC m=+1895.789765229" observedRunningTime="2026-03-19 17:12:14.272939204 +0000 UTC m=+1897.418996744" watchObservedRunningTime="2026-03-19 17:12:14.284359217 +0000 UTC m=+1897.430416757" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.651594 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.820644 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.820771 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.820878 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.820959 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgt4f\" (UniqueName: \"kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.821170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.821218 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc\") pod \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\" (UID: \"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c\") " Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.839005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f" (OuterVolumeSpecName: "kube-api-access-dgt4f") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "kube-api-access-dgt4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.925619 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgt4f\" (UniqueName: \"kubernetes.io/projected/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-kube-api-access-dgt4f\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.953618 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.960656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.977445 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.982244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config" (OuterVolumeSpecName: "config") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:14 crc kubenswrapper[4792]: I0319 17:12:14.987690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" (UID: "2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:15 crc kubenswrapper[4792]: W0319 17:12:15.028968 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode705cf76_1371_4677_80a0_8582f8695a29.slice/crio-1ac90cb2c75b91b3d7f4f04037db67b459ec9da739407ca56a74bbfdda37c05f WatchSource:0}: Error finding container 1ac90cb2c75b91b3d7f4f04037db67b459ec9da739407ca56a74bbfdda37c05f: Status 404 returned error can't find the container with id 1ac90cb2c75b91b3d7f4f04037db67b459ec9da739407ca56a74bbfdda37c05f Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.029911 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.030035 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.030109 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.030173 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.030236 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.042671 4792 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-99s9r"] Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.263147 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" event={"ID":"e705cf76-1371-4677-80a0-8582f8695a29","Type":"ContainerStarted","Data":"1ac90cb2c75b91b3d7f4f04037db67b459ec9da739407ca56a74bbfdda37c05f"} Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.266619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" event={"ID":"2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c","Type":"ContainerDied","Data":"d3e62a0c8412fc09bfecbe98d052adb0e659a85df1d9c83769e2823604e31cf1"} Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.266668 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-pwm7z" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.266681 4792 scope.go:117] "RemoveContainer" containerID="d1c2a7bb254c478ed8f69e6fbad542ac1890be53f4ce20a5ebe938d55061184e" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.310389 4792 scope.go:117] "RemoveContainer" containerID="500ffbd3ada82aae55aef4e73f7b140332734416488bdd4fd49bb0859faa14fa" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.335810 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.350735 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-pwm7z"] Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.740241 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:12:15 crc kubenswrapper[4792]: E0319 17:12:15.741216 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:12:15 crc kubenswrapper[4792]: I0319 17:12:15.756227 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" path="/var/lib/kubelet/pods/2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c/volumes" Mar 19 17:12:16 crc kubenswrapper[4792]: I0319 17:12:16.277917 4792 generic.go:334] "Generic (PLEG): container finished" podID="e705cf76-1371-4677-80a0-8582f8695a29" containerID="a3aec71e1e8005d8ad2e18dd65a03a7139a4db39a2512e597b1232bc8e22a85c" exitCode=0 Mar 19 17:12:16 crc kubenswrapper[4792]: I0319 17:12:16.278027 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" event={"ID":"e705cf76-1371-4677-80a0-8582f8695a29","Type":"ContainerDied","Data":"a3aec71e1e8005d8ad2e18dd65a03a7139a4db39a2512e597b1232bc8e22a85c"} Mar 19 17:12:16 crc kubenswrapper[4792]: I0319 17:12:16.280951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6dwwv" event={"ID":"a5ee639b-34bf-4824-902d-e38af5ad4527","Type":"ContainerStarted","Data":"ae006aa0b5bfafe805c62a798245f2fc0ae0a34d6ff2a79ec2f8baa42508d5b3"} Mar 19 17:12:16 crc kubenswrapper[4792]: I0319 17:12:16.340433 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6dwwv" podStartSLOduration=3.021627485 podStartE2EDuration="36.340409369s" podCreationTimestamp="2026-03-19 17:11:40 +0000 UTC" firstStartedPulling="2026-03-19 17:11:41.759133848 +0000 UTC m=+1864.905191388" lastFinishedPulling="2026-03-19 17:12:15.077915732 +0000 UTC m=+1898.223973272" observedRunningTime="2026-03-19 17:12:16.326104936 +0000 UTC m=+1899.472162466" watchObservedRunningTime="2026-03-19 17:12:16.340409369 +0000 UTC 
m=+1899.486466909" Mar 19 17:12:17 crc kubenswrapper[4792]: I0319 17:12:17.299538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" event={"ID":"e705cf76-1371-4677-80a0-8582f8695a29","Type":"ContainerStarted","Data":"07692359fea76a0ceffe931375c0b92e752cf04a3d94bc538fe0a1ef87d8c97c"} Mar 19 17:12:17 crc kubenswrapper[4792]: I0319 17:12:17.302580 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:17 crc kubenswrapper[4792]: I0319 17:12:17.340867 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" podStartSLOduration=4.340829122 podStartE2EDuration="4.340829122s" podCreationTimestamp="2026-03-19 17:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:12:17.339345331 +0000 UTC m=+1900.485402881" watchObservedRunningTime="2026-03-19 17:12:17.340829122 +0000 UTC m=+1900.486886662" Mar 19 17:12:18 crc kubenswrapper[4792]: I0319 17:12:18.312117 4792 generic.go:334] "Generic (PLEG): container finished" podID="a5ee639b-34bf-4824-902d-e38af5ad4527" containerID="ae006aa0b5bfafe805c62a798245f2fc0ae0a34d6ff2a79ec2f8baa42508d5b3" exitCode=0 Mar 19 17:12:18 crc kubenswrapper[4792]: I0319 17:12:18.312334 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6dwwv" event={"ID":"a5ee639b-34bf-4824-902d-e38af5ad4527","Type":"ContainerDied","Data":"ae006aa0b5bfafe805c62a798245f2fc0ae0a34d6ff2a79ec2f8baa42508d5b3"} Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.820158 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6dwwv" Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.856122 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data\") pod \"a5ee639b-34bf-4824-902d-e38af5ad4527\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.856241 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle\") pod \"a5ee639b-34bf-4824-902d-e38af5ad4527\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.856363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qbdz\" (UniqueName: \"kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz\") pod \"a5ee639b-34bf-4824-902d-e38af5ad4527\" (UID: \"a5ee639b-34bf-4824-902d-e38af5ad4527\") " Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.889045 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz" (OuterVolumeSpecName: "kube-api-access-8qbdz") pod "a5ee639b-34bf-4824-902d-e38af5ad4527" (UID: "a5ee639b-34bf-4824-902d-e38af5ad4527"). InnerVolumeSpecName "kube-api-access-8qbdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.956032 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ee639b-34bf-4824-902d-e38af5ad4527" (UID: "a5ee639b-34bf-4824-902d-e38af5ad4527"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.961453 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qbdz\" (UniqueName: \"kubernetes.io/projected/a5ee639b-34bf-4824-902d-e38af5ad4527-kube-api-access-8qbdz\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:19 crc kubenswrapper[4792]: I0319 17:12:19.965661 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:20 crc kubenswrapper[4792]: I0319 17:12:20.061765 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data" (OuterVolumeSpecName: "config-data") pod "a5ee639b-34bf-4824-902d-e38af5ad4527" (UID: "a5ee639b-34bf-4824-902d-e38af5ad4527"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:20 crc kubenswrapper[4792]: I0319 17:12:20.070766 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ee639b-34bf-4824-902d-e38af5ad4527-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:20 crc kubenswrapper[4792]: I0319 17:12:20.337081 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6dwwv" event={"ID":"a5ee639b-34bf-4824-902d-e38af5ad4527","Type":"ContainerDied","Data":"25081d3fdae32978d62c029c1fcd73959e1f37851d5e2d5d2472bb4e7ea4501d"} Mar 19 17:12:20 crc kubenswrapper[4792]: I0319 17:12:20.337125 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6dwwv" Mar 19 17:12:20 crc kubenswrapper[4792]: I0319 17:12:20.337128 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25081d3fdae32978d62c029c1fcd73959e1f37851d5e2d5d2472bb4e7ea4501d" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.763784 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6cf6b5d9c8-vct92"] Mar 19 17:12:21 crc kubenswrapper[4792]: E0319 17:12:21.765031 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ee639b-34bf-4824-902d-e38af5ad4527" containerName="heat-db-sync" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.765054 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ee639b-34bf-4824-902d-e38af5ad4527" containerName="heat-db-sync" Mar 19 17:12:21 crc kubenswrapper[4792]: E0319 17:12:21.765084 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="dnsmasq-dns" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.765094 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="dnsmasq-dns" Mar 19 17:12:21 crc kubenswrapper[4792]: E0319 17:12:21.765115 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="init" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.765123 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="init" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.765458 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0ddd9c-4760-4f3e-bb07-ac8aedbb145c" containerName="dnsmasq-dns" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.765474 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ee639b-34bf-4824-902d-e38af5ad4527" containerName="heat-db-sync" Mar 19 
17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.766648 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.780547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6b5d9c8-vct92"] Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.813675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data-custom\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.813752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8npbs\" (UniqueName: \"kubernetes.io/projected/737815ac-f033-4b2b-be52-1418b60262ed-kube-api-access-8npbs\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.813922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-combined-ca-bundle\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.814383 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 
17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.835920 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6bf7b6486c-5hf6r"] Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.837629 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.856186 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d76c97877-4zbfk"] Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.858146 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.881547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d76c97877-4zbfk"] Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.894993 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bf7b6486c-5hf6r"] Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.920592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data-custom\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.920653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-combined-ca-bundle\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.920741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tn4dn\" (UniqueName: \"kubernetes.io/projected/e97170a9-754d-4a31-a542-fc6336f483bb-kube-api-access-tn4dn\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.920765 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-public-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.920975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-internal-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.921670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-internal-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.921717 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.921779 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data-custom\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.921958 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922023 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-public-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-combined-ca-bundle\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data-custom\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922145 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8npbs\" (UniqueName: \"kubernetes.io/projected/737815ac-f033-4b2b-be52-1418b60262ed-kube-api-access-8npbs\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrzm\" (UniqueName: \"kubernetes.io/projected/bf529b70-4061-4841-ae2a-553db6001e83-kube-api-access-bqrzm\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922371 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-combined-ca-bundle\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.922391 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.933725 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-combined-ca-bundle\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.942304 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.943364 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8npbs\" (UniqueName: \"kubernetes.io/projected/737815ac-f033-4b2b-be52-1418b60262ed-kube-api-access-8npbs\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:21 crc kubenswrapper[4792]: I0319 17:12:21.949600 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/737815ac-f033-4b2b-be52-1418b60262ed-config-data-custom\") pod \"heat-engine-6cf6b5d9c8-vct92\" (UID: \"737815ac-f033-4b2b-be52-1418b60262ed\") " pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.024792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-internal-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.024952 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-internal-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.024986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data-custom\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-public-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-combined-ca-bundle\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrzm\" (UniqueName: \"kubernetes.io/projected/bf529b70-4061-4841-ae2a-553db6001e83-kube-api-access-bqrzm\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data-custom\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-combined-ca-bundle\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4dn\" (UniqueName: \"kubernetes.io/projected/e97170a9-754d-4a31-a542-fc6336f483bb-kube-api-access-tn4dn\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.025362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-public-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.030721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-internal-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.031047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.031153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-public-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.032449 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.033175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-public-tls-certs\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.034705 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-config-data-custom\") pod \"heat-api-6bf7b6486c-5hf6r\" 
(UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.037479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-config-data-custom\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.042602 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-internal-tls-certs\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.045022 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97170a9-754d-4a31-a542-fc6336f483bb-combined-ca-bundle\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.045231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf529b70-4061-4841-ae2a-553db6001e83-combined-ca-bundle\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.045938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrzm\" (UniqueName: \"kubernetes.io/projected/bf529b70-4061-4841-ae2a-553db6001e83-kube-api-access-bqrzm\") pod \"heat-api-6bf7b6486c-5hf6r\" (UID: \"bf529b70-4061-4841-ae2a-553db6001e83\") " 
pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.048374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4dn\" (UniqueName: \"kubernetes.io/projected/e97170a9-754d-4a31-a542-fc6336f483bb-kube-api-access-tn4dn\") pod \"heat-cfnapi-7d76c97877-4zbfk\" (UID: \"e97170a9-754d-4a31-a542-fc6336f483bb\") " pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.094731 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.174227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.190421 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:22 crc kubenswrapper[4792]: W0319 17:12:22.658072 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737815ac_f033_4b2b_be52_1418b60262ed.slice/crio-2dc8476775cbda6674a5e16a65dea907d1b55efdb5e1ae5e3431080aa53fcb44 WatchSource:0}: Error finding container 2dc8476775cbda6674a5e16a65dea907d1b55efdb5e1ae5e3431080aa53fcb44: Status 404 returned error can't find the container with id 2dc8476775cbda6674a5e16a65dea907d1b55efdb5e1ae5e3431080aa53fcb44 Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.674433 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6b5d9c8-vct92"] Mar 19 17:12:22 crc kubenswrapper[4792]: W0319 17:12:22.829100 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode97170a9_754d_4a31_a542_fc6336f483bb.slice/crio-c8080f2648a552b8a4e2d243c3b5f5d4fc040e08e942cefcfb4e003731ebab08 WatchSource:0}: Error finding container c8080f2648a552b8a4e2d243c3b5f5d4fc040e08e942cefcfb4e003731ebab08: Status 404 returned error can't find the container with id c8080f2648a552b8a4e2d243c3b5f5d4fc040e08e942cefcfb4e003731ebab08 Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.832814 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bf7b6486c-5hf6r"] Mar 19 17:12:22 crc kubenswrapper[4792]: W0319 17:12:22.835724 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf529b70_4061_4841_ae2a_553db6001e83.slice/crio-31f2c8b2b4bf750e56d4d48c087dac1454c53a89c3bbaabf13437cee0ce53a94 WatchSource:0}: Error finding container 31f2c8b2b4bf750e56d4d48c087dac1454c53a89c3bbaabf13437cee0ce53a94: Status 404 returned error can't find the container with id 31f2c8b2b4bf750e56d4d48c087dac1454c53a89c3bbaabf13437cee0ce53a94 Mar 19 17:12:22 crc kubenswrapper[4792]: I0319 17:12:22.847818 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d76c97877-4zbfk"] Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.390299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf7b6486c-5hf6r" event={"ID":"bf529b70-4061-4841-ae2a-553db6001e83","Type":"ContainerStarted","Data":"31f2c8b2b4bf750e56d4d48c087dac1454c53a89c3bbaabf13437cee0ce53a94"} Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.392533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" event={"ID":"e97170a9-754d-4a31-a542-fc6336f483bb","Type":"ContainerStarted","Data":"c8080f2648a552b8a4e2d243c3b5f5d4fc040e08e942cefcfb4e003731ebab08"} Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.394765 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b5d9c8-vct92" event={"ID":"737815ac-f033-4b2b-be52-1418b60262ed","Type":"ContainerStarted","Data":"2e30a06a752cf90ff88d9349be0c4738a23641fd37aec2a63af9bc293ccc909d"} Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.394795 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b5d9c8-vct92" event={"ID":"737815ac-f033-4b2b-be52-1418b60262ed","Type":"ContainerStarted","Data":"2dc8476775cbda6674a5e16a65dea907d1b55efdb5e1ae5e3431080aa53fcb44"} Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.395057 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.689976 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-99s9r" Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.740125 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6cf6b5d9c8-vct92" podStartSLOduration=2.740099535 podStartE2EDuration="2.740099535s" podCreationTimestamp="2026-03-19 17:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:12:23.411162365 +0000 UTC m=+1906.557219905" watchObservedRunningTime="2026-03-19 17:12:23.740099535 +0000 UTC m=+1906.886157075" Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.809757 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"] Mar 19 17:12:23 crc kubenswrapper[4792]: I0319 17:12:23.810142 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="dnsmasq-dns" containerID="cri-o://9fd6d7b69a40542fd9645fa8183e37c233b95f46a7d600f117e2e0f56f21b4a0" gracePeriod=10 
Mar 19 17:12:24 crc kubenswrapper[4792]: I0319 17:12:24.430422 4792 generic.go:334] "Generic (PLEG): container finished" podID="ee0d117f-a568-4077-a594-bcba45f1188c" containerID="9fd6d7b69a40542fd9645fa8183e37c233b95f46a7d600f117e2e0f56f21b4a0" exitCode=0 Mar 19 17:12:24 crc kubenswrapper[4792]: I0319 17:12:24.430643 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" event={"ID":"ee0d117f-a568-4077-a594-bcba45f1188c","Type":"ContainerDied","Data":"9fd6d7b69a40542fd9645fa8183e37c233b95f46a7d600f117e2e0f56f21b4a0"} Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.093060 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.125906 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.125986 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.126064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.126269 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.126306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.126335 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.126371 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f7rv\" (UniqueName: \"kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv\") pod \"ee0d117f-a568-4077-a594-bcba45f1188c\" (UID: \"ee0d117f-a568-4077-a594-bcba45f1188c\") " Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.192267 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv" (OuterVolumeSpecName: "kube-api-access-6f7rv") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "kube-api-access-6f7rv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.232871 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f7rv\" (UniqueName: \"kubernetes.io/projected/ee0d117f-a568-4077-a594-bcba45f1188c-kube-api-access-6f7rv\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.357811 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config" (OuterVolumeSpecName: "config") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.365832 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.379472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.384347 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.390417 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.390892 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee0d117f-a568-4077-a594-bcba45f1188c" (UID: "ee0d117f-a568-4077-a594-bcba45f1188c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438259 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438294 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438306 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438320 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-config\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438333 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.438343 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee0d117f-a568-4077-a594-bcba45f1188c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.445248 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf7b6486c-5hf6r" event={"ID":"bf529b70-4061-4841-ae2a-553db6001e83","Type":"ContainerStarted","Data":"b2c47d354b30c41904c27de99486b7faeea7187a744210cbd1e1ab9287f6863a"} Mar 19 17:12:25 crc 
kubenswrapper[4792]: I0319 17:12:25.445466 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.447575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" event={"ID":"e97170a9-754d-4a31-a542-fc6336f483bb","Type":"ContainerStarted","Data":"2d9fcd308867150ea733ce45b007119c38e6d47ca26190617095d9eebcdf64b8"} Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.448013 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.450183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" event={"ID":"ee0d117f-a568-4077-a594-bcba45f1188c","Type":"ContainerDied","Data":"c395318a98aa0ce1e1f43d8304799a588011f9ddda1aecc470506d450bdd318c"} Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.450230 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-phbpx" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.450242 4792 scope.go:117] "RemoveContainer" containerID="9fd6d7b69a40542fd9645fa8183e37c233b95f46a7d600f117e2e0f56f21b4a0" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.500883 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6bf7b6486c-5hf6r" podStartSLOduration=2.45292959 podStartE2EDuration="4.50085876s" podCreationTimestamp="2026-03-19 17:12:21 +0000 UTC" firstStartedPulling="2026-03-19 17:12:22.839367837 +0000 UTC m=+1905.985425377" lastFinishedPulling="2026-03-19 17:12:24.887297007 +0000 UTC m=+1908.033354547" observedRunningTime="2026-03-19 17:12:25.467267267 +0000 UTC m=+1908.613324817" watchObservedRunningTime="2026-03-19 17:12:25.50085876 +0000 UTC m=+1908.646916320" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.522968 4792 scope.go:117] "RemoveContainer" containerID="ec875f6218ae8647de83c04f5c266598b2ac2431642b24390150f7253e2cc997" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.557425 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" podStartSLOduration=2.50315782 podStartE2EDuration="4.557393032s" podCreationTimestamp="2026-03-19 17:12:21 +0000 UTC" firstStartedPulling="2026-03-19 17:12:22.831496722 +0000 UTC m=+1905.977554262" lastFinishedPulling="2026-03-19 17:12:24.885731934 +0000 UTC m=+1908.031789474" observedRunningTime="2026-03-19 17:12:25.500359016 +0000 UTC m=+1908.646416566" watchObservedRunningTime="2026-03-19 17:12:25.557393032 +0000 UTC m=+1908.703450572" Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.653523 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"] Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.672644 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-594cb89c79-phbpx"] Mar 19 17:12:25 crc kubenswrapper[4792]: I0319 17:12:25.762613 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" path="/var/lib/kubelet/pods/ee0d117f-a568-4077-a594-bcba45f1188c/volumes" Mar 19 17:12:26 crc kubenswrapper[4792]: I0319 17:12:26.221581 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 19 17:12:26 crc kubenswrapper[4792]: I0319 17:12:26.740441 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:12:27 crc kubenswrapper[4792]: I0319 17:12:27.480713 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6"} Mar 19 17:12:32 crc kubenswrapper[4792]: I0319 17:12:32.142581 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6cf6b5d9c8-vct92" Mar 19 17:12:32 crc kubenswrapper[4792]: I0319 17:12:32.193197 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:12:32 crc kubenswrapper[4792]: I0319 17:12:32.193411 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" containerID="cri-o://97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" gracePeriod=60 Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.142308 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk"] Mar 19 17:12:34 crc kubenswrapper[4792]: E0319 17:12:34.144600 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="init" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.144743 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="init" Mar 19 17:12:34 crc kubenswrapper[4792]: E0319 17:12:34.144938 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="dnsmasq-dns" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.145037 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="dnsmasq-dns" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.145408 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0d117f-a568-4077-a594-bcba45f1188c" containerName="dnsmasq-dns" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.146667 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.151185 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.151333 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.162419 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.162575 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.187807 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk"] Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 
17:12:34.239297 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.239457 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntlwt\" (UniqueName: \"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.239644 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.239675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.318237 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-6bf7b6486c-5hf6r" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.341678 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.341822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlwt\" (UniqueName: \"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.341903 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.341929 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.349824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.359095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.361989 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.367579 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlwt\" (UniqueName: \"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.396242 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.396446 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5f6467b4f6-xl4lw" 
podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerName="heat-api" containerID="cri-o://0f5a4c19f982a0989895ea3de8070e105bb11f88f371c336cdcaf224a07e8247" gracePeriod=60 Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.488520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.551373 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7d76c97877-4zbfk" Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.634405 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:12:34 crc kubenswrapper[4792]: I0319 17:12:34.634693 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" containerName="heat-cfnapi" containerID="cri-o://42f1fd9797cd24239c8bab1e7501926cf12f9cce920f9d94c0561ac1f7227878" gracePeriod=60 Mar 19 17:12:35 crc kubenswrapper[4792]: I0319 17:12:35.431607 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk"] Mar 19 17:12:35 crc kubenswrapper[4792]: W0319 17:12:35.435100 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50866ef3_6742_4a83_a766_2c075a8d45cb.slice/crio-7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253 WatchSource:0}: Error finding container 7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253: Status 404 returned error can't find the container with id 7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253 Mar 19 17:12:35 crc kubenswrapper[4792]: I0319 17:12:35.578616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" event={"ID":"50866ef3-6742-4a83-a766-2c075a8d45cb","Type":"ContainerStarted","Data":"7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253"} Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.072138 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5f6467b4f6-xl4lw" podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.238:8004/healthcheck\": dial tcp 10.217.0.238:8004: connect: connection refused" Mar 19 17:12:38 crc kubenswrapper[4792]: E0319 17:12:38.137318 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:38 crc kubenswrapper[4792]: E0319 17:12:38.139036 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:38 crc kubenswrapper[4792]: E0319 17:12:38.142611 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:38 crc kubenswrapper[4792]: E0319 17:12:38.142654 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.192012 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.239:8000/healthcheck\": dial tcp 10.217.0.239:8000: connect: connection refused" Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.631137 4792 generic.go:334] "Generic (PLEG): container finished" podID="7a16c447-44d2-4bba-ad99-aa5893891486" containerID="42f1fd9797cd24239c8bab1e7501926cf12f9cce920f9d94c0561ac1f7227878" exitCode=0 Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.631205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" event={"ID":"7a16c447-44d2-4bba-ad99-aa5893891486","Type":"ContainerDied","Data":"42f1fd9797cd24239c8bab1e7501926cf12f9cce920f9d94c0561ac1f7227878"} Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.634297 4792 generic.go:334] "Generic (PLEG): container finished" podID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerID="0f5a4c19f982a0989895ea3de8070e105bb11f88f371c336cdcaf224a07e8247" exitCode=0 Mar 19 17:12:38 crc kubenswrapper[4792]: I0319 17:12:38.634341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6467b4f6-xl4lw" event={"ID":"a287def6-0542-42d7-bf64-dca21b2bd57b","Type":"ContainerDied","Data":"0f5a4c19f982a0989895ea3de8070e105bb11f88f371c336cdcaf224a07e8247"} Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.651624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" event={"ID":"7a16c447-44d2-4bba-ad99-aa5893891486","Type":"ContainerDied","Data":"f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1"} Mar 19 17:12:39 crc kubenswrapper[4792]: 
I0319 17:12:39.651983 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f96f926f026f19334113d842b92a36a1254d83e257c3021a6d0403b328c33ad1" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.653651 4792 generic.go:334] "Generic (PLEG): container finished" podID="ae048e02-6ff7-4fa8-81c0-57ab3c051662" containerID="63030955758dc1f3cb51d9b910d45220399a5f7af31d43ca347fe1d1d75efc9f" exitCode=0 Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.653716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ae048e02-6ff7-4fa8-81c0-57ab3c051662","Type":"ContainerDied","Data":"63030955758dc1f3cb51d9b910d45220399a5f7af31d43ca347fe1d1d75efc9f"} Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.657024 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6467b4f6-xl4lw" event={"ID":"a287def6-0542-42d7-bf64-dca21b2bd57b","Type":"ContainerDied","Data":"4487a7f3ecc86a47b3f11ac9a2d44ddc03f3fcf77efa79023f10f34aa21c6ec8"} Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.657074 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4487a7f3ecc86a47b3f11ac9a2d44ddc03f3fcf77efa79023f10f34aa21c6ec8" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.665590 4792 generic.go:334] "Generic (PLEG): container finished" podID="c585782f-9e4f-4495-9e68-a10aa5fc90b0" containerID="6b73273f4d2c9a41c584882cbaad172fbcc5cafae869c19e808e9a8cbb1660cd" exitCode=0 Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.665642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c585782f-9e4f-4495-9e68-a10aa5fc90b0","Type":"ContainerDied","Data":"6b73273f4d2c9a41c584882cbaad172fbcc5cafae869c19e808e9a8cbb1660cd"} Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.703000 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.706340 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792320 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792534 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: 
\"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792597 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792692 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz6fg\" (UniqueName: \"kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792818 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74l96\" (UniqueName: \"kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96\") pod \"7a16c447-44d2-4bba-ad99-aa5893891486\" (UID: \"7a16c447-44d2-4bba-ad99-aa5893891486\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792880 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.792948 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.793035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom\") pod \"a287def6-0542-42d7-bf64-dca21b2bd57b\" (UID: \"a287def6-0542-42d7-bf64-dca21b2bd57b\") " Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.801781 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96" (OuterVolumeSpecName: "kube-api-access-74l96") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). InnerVolumeSpecName "kube-api-access-74l96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.865135 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.865244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.870711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg" (OuterVolumeSpecName: "kube-api-access-tz6fg") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). InnerVolumeSpecName "kube-api-access-tz6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.899646 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz6fg\" (UniqueName: \"kubernetes.io/projected/a287def6-0542-42d7-bf64-dca21b2bd57b-kube-api-access-tz6fg\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.900125 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74l96\" (UniqueName: \"kubernetes.io/projected/7a16c447-44d2-4bba-ad99-aa5893891486-kube-api-access-74l96\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.900181 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:39 crc kubenswrapper[4792]: I0319 17:12:39.900195 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.031055 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.049064 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.123426 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.123464 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.135293 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.226093 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.262101 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data" (OuterVolumeSpecName: "config-data") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.278810 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data" (OuterVolumeSpecName: "config-data") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.286822 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a16c447-44d2-4bba-ad99-aa5893891486" (UID: "7a16c447-44d2-4bba-ad99-aa5893891486"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.297088 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.298540 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a287def6-0542-42d7-bf64-dca21b2bd57b" (UID: "a287def6-0542-42d7-bf64-dca21b2bd57b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.328464 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.328496 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.328505 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a287def6-0542-42d7-bf64-dca21b2bd57b-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.328515 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.328524 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a16c447-44d2-4bba-ad99-aa5893891486-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.680869 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c585782f-9e4f-4495-9e68-a10aa5fc90b0","Type":"ContainerStarted","Data":"e0a50864dc489786754fbccae98a6c309296e3a27120f9b6a0664b037d585a1a"} Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.681560 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.692265 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ae048e02-6ff7-4fa8-81c0-57ab3c051662","Type":"ContainerStarted","Data":"0b8013b7c39790a42e5232ea4a286f5247802d339c5a85474163aa8d7de025e2"} Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.693795 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.695920 4792 generic.go:334] "Generic (PLEG): container finished" podID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" exitCode=0 Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.695995 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c6dcb76d4-jdvrw" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.695987 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" event={"ID":"9318ba4f-8979-46fa-8cb4-e1c12ee94e35","Type":"ContainerDied","Data":"97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3"} Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.696055 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f6467b4f6-xl4lw" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.719960 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.719930857 podStartE2EDuration="37.719930857s" podCreationTimestamp="2026-03-19 17:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:12:40.708087132 +0000 UTC m=+1923.854144672" watchObservedRunningTime="2026-03-19 17:12:40.719930857 +0000 UTC m=+1923.865988397" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.738712 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=37.738696212 podStartE2EDuration="37.738696212s" podCreationTimestamp="2026-03-19 17:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:12:40.735231117 +0000 UTC m=+1923.881288657" watchObservedRunningTime="2026-03-19 17:12:40.738696212 +0000 UTC m=+1923.884753752" Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.772775 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.790085 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c6dcb76d4-jdvrw"] Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.796882 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.808496 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5f6467b4f6-xl4lw"] Mar 19 17:12:40 crc kubenswrapper[4792]: I0319 17:12:40.991178 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-j4x4v"] 
Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.013604 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-j4x4v"] Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.067754 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rcrgb"] Mar 19 17:12:41 crc kubenswrapper[4792]: E0319 17:12:41.068317 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerName="heat-api" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.068341 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerName="heat-api" Mar 19 17:12:41 crc kubenswrapper[4792]: E0319 17:12:41.068385 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" containerName="heat-cfnapi" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.068391 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" containerName="heat-cfnapi" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.068636 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" containerName="heat-cfnapi" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.068658 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" containerName="heat-api" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.069504 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.085322 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.098458 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rcrgb"] Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.258309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.258380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.260630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29xw\" (UniqueName: \"kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.260709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.363980 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.364035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.364110 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g29xw\" (UniqueName: \"kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.364129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.369997 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.370438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle\") 
pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.392124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g29xw\" (UniqueName: \"kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.392627 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data\") pod \"aodh-db-sync-rcrgb\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.430890 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.755811 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c2a4ef-0756-47fd-a30e-1af46f2d5bc8" path="/var/lib/kubelet/pods/76c2a4ef-0756-47fd-a30e-1af46f2d5bc8/volumes" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.756741 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a16c447-44d2-4bba-ad99-aa5893891486" path="/var/lib/kubelet/pods/7a16c447-44d2-4bba-ad99-aa5893891486/volumes" Mar 19 17:12:41 crc kubenswrapper[4792]: I0319 17:12:41.757431 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a287def6-0542-42d7-bf64-dca21b2bd57b" path="/var/lib/kubelet/pods/a287def6-0542-42d7-bf64-dca21b2bd57b/volumes" Mar 19 17:12:48 crc kubenswrapper[4792]: E0319 17:12:48.142136 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3 is running failed: container process not found" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:48 crc kubenswrapper[4792]: E0319 17:12:48.144882 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3 is running failed: container process not found" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:48 crc kubenswrapper[4792]: E0319 17:12:48.145054 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3 is running failed: container process not found" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 19 17:12:48 crc kubenswrapper[4792]: E0319 17:12:48.145080 4792 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" Mar 19 17:12:48 crc kubenswrapper[4792]: I0319 17:12:48.985306 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.013164 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data\") pod \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.013254 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mk2\" (UniqueName: \"kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2\") pod \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.013320 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle\") pod \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.013502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom\") pod \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\" (UID: \"9318ba4f-8979-46fa-8cb4-e1c12ee94e35\") " Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.037565 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9318ba4f-8979-46fa-8cb4-e1c12ee94e35" (UID: "9318ba4f-8979-46fa-8cb4-e1c12ee94e35"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.037615 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2" (OuterVolumeSpecName: "kube-api-access-x9mk2") pod "9318ba4f-8979-46fa-8cb4-e1c12ee94e35" (UID: "9318ba4f-8979-46fa-8cb4-e1c12ee94e35"). InnerVolumeSpecName "kube-api-access-x9mk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.064968 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9318ba4f-8979-46fa-8cb4-e1c12ee94e35" (UID: "9318ba4f-8979-46fa-8cb4-e1c12ee94e35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.092209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data" (OuterVolumeSpecName: "config-data") pod "9318ba4f-8979-46fa-8cb4-e1c12ee94e35" (UID: "9318ba4f-8979-46fa-8cb4-e1c12ee94e35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.122374 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.122404 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mk2\" (UniqueName: \"kubernetes.io/projected/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-kube-api-access-x9mk2\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.122415 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.122424 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9318ba4f-8979-46fa-8cb4-e1c12ee94e35-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.372368 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rcrgb"] Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.576574 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.809314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" event={"ID":"9318ba4f-8979-46fa-8cb4-e1c12ee94e35","Type":"ContainerDied","Data":"bcb0d54937dfa5dce244670e9fba79a5684432838ffbd2f879950b32903afcf5"} Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.809336 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7bcd68ccb9-rjwmx" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.809406 4792 scope.go:117] "RemoveContainer" containerID="97b4856bd9f95f7356fe2f83b67eb88e9aac05e34d63bf835117038401ac25b3" Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.811717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rcrgb" event={"ID":"2fa5959e-fb35-4b6e-95de-d7a87bf4479e","Type":"ContainerStarted","Data":"6072dcf7c7f812e469442508550e01e0de900a8cc36eb28382d4789fa4320527"} Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.814887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" event={"ID":"50866ef3-6742-4a83-a766-2c075a8d45cb","Type":"ContainerStarted","Data":"05d8b391577956d1c5312024b7c0c0a8a7806f4d8859ffdef720a59b3245cd13"} Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.837144 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.850656 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7bcd68ccb9-rjwmx"] Mar 19 17:12:49 crc kubenswrapper[4792]: I0319 17:12:49.870420 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" podStartSLOduration=1.7423151940000001 podStartE2EDuration="15.870399822s" podCreationTimestamp="2026-03-19 17:12:34 +0000 UTC" firstStartedPulling="2026-03-19 17:12:35.437167217 +0000 UTC m=+1918.583224767" lastFinishedPulling="2026-03-19 17:12:49.565251855 +0000 UTC m=+1932.711309395" observedRunningTime="2026-03-19 17:12:49.86340445 +0000 UTC m=+1933.009461990" watchObservedRunningTime="2026-03-19 17:12:49.870399822 +0000 UTC m=+1933.016457362" Mar 19 17:12:51 crc kubenswrapper[4792]: I0319 17:12:51.759712 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" path="/var/lib/kubelet/pods/9318ba4f-8979-46fa-8cb4-e1c12ee94e35/volumes" Mar 19 17:12:54 crc kubenswrapper[4792]: I0319 17:12:54.259011 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 19 17:12:54 crc kubenswrapper[4792]: I0319 17:12:54.322634 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:12:54 crc kubenswrapper[4792]: I0319 17:12:54.525069 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 19 17:12:55 crc kubenswrapper[4792]: I0319 17:12:55.937396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rcrgb" event={"ID":"2fa5959e-fb35-4b6e-95de-d7a87bf4479e","Type":"ContainerStarted","Data":"d0847bc6693c17f9272bbcea2dd07bc513bad0510b6baedb34cdc5633b92d5e9"} Mar 19 17:12:55 crc kubenswrapper[4792]: I0319 17:12:55.965530 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rcrgb" podStartSLOduration=8.941021832 podStartE2EDuration="14.965509523s" podCreationTimestamp="2026-03-19 17:12:41 +0000 UTC" firstStartedPulling="2026-03-19 17:12:49.367070325 +0000 UTC m=+1932.513127865" lastFinishedPulling="2026-03-19 17:12:55.391558016 +0000 UTC m=+1938.537615556" observedRunningTime="2026-03-19 17:12:55.95700542 +0000 UTC m=+1939.103062950" watchObservedRunningTime="2026-03-19 17:12:55.965509523 +0000 UTC m=+1939.111567053" Mar 19 17:12:58 crc kubenswrapper[4792]: I0319 17:12:58.930017 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" containerID="cri-o://db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544" gracePeriod=604796 Mar 19 17:12:58 crc kubenswrapper[4792]: I0319 17:12:58.972075 4792 generic.go:334] "Generic 
(PLEG): container finished" podID="2fa5959e-fb35-4b6e-95de-d7a87bf4479e" containerID="d0847bc6693c17f9272bbcea2dd07bc513bad0510b6baedb34cdc5633b92d5e9" exitCode=0 Mar 19 17:12:58 crc kubenswrapper[4792]: I0319 17:12:58.972151 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rcrgb" event={"ID":"2fa5959e-fb35-4b6e-95de-d7a87bf4479e","Type":"ContainerDied","Data":"d0847bc6693c17f9272bbcea2dd07bc513bad0510b6baedb34cdc5633b92d5e9"} Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.537458 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.632764 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle\") pod \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.632877 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts\") pod \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.633011 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g29xw\" (UniqueName: \"kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw\") pod \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.633183 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data\") pod 
\"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\" (UID: \"2fa5959e-fb35-4b6e-95de-d7a87bf4479e\") " Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.638570 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts" (OuterVolumeSpecName: "scripts") pod "2fa5959e-fb35-4b6e-95de-d7a87bf4479e" (UID: "2fa5959e-fb35-4b6e-95de-d7a87bf4479e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.638794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw" (OuterVolumeSpecName: "kube-api-access-g29xw") pod "2fa5959e-fb35-4b6e-95de-d7a87bf4479e" (UID: "2fa5959e-fb35-4b6e-95de-d7a87bf4479e"). InnerVolumeSpecName "kube-api-access-g29xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.674698 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fa5959e-fb35-4b6e-95de-d7a87bf4479e" (UID: "2fa5959e-fb35-4b6e-95de-d7a87bf4479e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.676341 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data" (OuterVolumeSpecName: "config-data") pod "2fa5959e-fb35-4b6e-95de-d7a87bf4479e" (UID: "2fa5959e-fb35-4b6e-95de-d7a87bf4479e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.736681 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.736713 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.736727 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:00.736738 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g29xw\" (UniqueName: \"kubernetes.io/projected/2fa5959e-fb35-4b6e-95de-d7a87bf4479e-kube-api-access-g29xw\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.055632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rcrgb" event={"ID":"2fa5959e-fb35-4b6e-95de-d7a87bf4479e","Type":"ContainerDied","Data":"6072dcf7c7f812e469442508550e01e0de900a8cc36eb28382d4789fa4320527"} Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.055681 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6072dcf7c7f812e469442508550e01e0de900a8cc36eb28382d4789fa4320527" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.055684 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rcrgb" Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.104767 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.105211 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-api" containerID="cri-o://da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" gracePeriod=30 Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.105786 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-listener" containerID="cri-o://221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" gracePeriod=30 Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.105833 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-notifier" containerID="cri-o://1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" gracePeriod=30 Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.105889 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-evaluator" containerID="cri-o://89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" gracePeriod=30 Mar 19 17:13:01 crc kubenswrapper[4792]: I0319 17:13:01.735308 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.070391 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="50866ef3-6742-4a83-a766-2c075a8d45cb" containerID="05d8b391577956d1c5312024b7c0c0a8a7806f4d8859ffdef720a59b3245cd13" exitCode=0 Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.070464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" event={"ID":"50866ef3-6742-4a83-a766-2c075a8d45cb","Type":"ContainerDied","Data":"05d8b391577956d1c5312024b7c0c0a8a7806f4d8859ffdef720a59b3245cd13"} Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.074977 4792 generic.go:334] "Generic (PLEG): container finished" podID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerID="89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" exitCode=0 Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.075016 4792 generic.go:334] "Generic (PLEG): container finished" podID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerID="da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" exitCode=0 Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.075043 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerDied","Data":"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c"} Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.075071 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerDied","Data":"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb"} Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.502620 4792 scope.go:117] "RemoveContainer" containerID="8634d390ea5e9b4e989d3ae467efae8b818c212c7d2d75bb7fa9478bd172fcc9" Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.536578 4792 scope.go:117] "RemoveContainer" containerID="ff5e5c192fab00a3f88c460818fa989e6d942376d4d422d4594566a5ab19c012" Mar 19 17:13:02 crc kubenswrapper[4792]: I0319 17:13:02.598171 4792 
scope.go:117] "RemoveContainer" containerID="7456c8a80c0df7e4e6a7dddbd799e9dec092585ecf3b85abc501acdfcec02938" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.625819 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.706064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle\") pod \"50866ef3-6742-4a83-a766-2c075a8d45cb\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.706329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntlwt\" (UniqueName: \"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt\") pod \"50866ef3-6742-4a83-a766-2c075a8d45cb\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.706493 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam\") pod \"50866ef3-6742-4a83-a766-2c075a8d45cb\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.706610 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory\") pod \"50866ef3-6742-4a83-a766-2c075a8d45cb\" (UID: \"50866ef3-6742-4a83-a766-2c075a8d45cb\") " Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.712608 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt" (OuterVolumeSpecName: "kube-api-access-ntlwt") pod "50866ef3-6742-4a83-a766-2c075a8d45cb" (UID: "50866ef3-6742-4a83-a766-2c075a8d45cb"). InnerVolumeSpecName "kube-api-access-ntlwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.713559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "50866ef3-6742-4a83-a766-2c075a8d45cb" (UID: "50866ef3-6742-4a83-a766-2c075a8d45cb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.738283 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50866ef3-6742-4a83-a766-2c075a8d45cb" (UID: "50866ef3-6742-4a83-a766-2c075a8d45cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.760137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory" (OuterVolumeSpecName: "inventory") pod "50866ef3-6742-4a83-a766-2c075a8d45cb" (UID: "50866ef3-6742-4a83-a766-2c075a8d45cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.809866 4792 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.809906 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntlwt\" (UniqueName: \"kubernetes.io/projected/50866ef3-6742-4a83-a766-2c075a8d45cb-kube-api-access-ntlwt\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.809920 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:03 crc kubenswrapper[4792]: I0319 17:13:03.809932 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50866ef3-6742-4a83-a766-2c075a8d45cb-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.097698 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.099039 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk" event={"ID":"50866ef3-6742-4a83-a766-2c075a8d45cb","Type":"ContainerDied","Data":"7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253"} Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.099080 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9dbfaee6e5f329bcf7ca3b3cb89d726a0152116124ec5d151922b21a2b5253" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.187684 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4"] Mar 19 17:13:04 crc kubenswrapper[4792]: E0319 17:13:04.188380 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188401 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" Mar 19 17:13:04 crc kubenswrapper[4792]: E0319 17:13:04.188433 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50866ef3-6742-4a83-a766-2c075a8d45cb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188443 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="50866ef3-6742-4a83-a766-2c075a8d45cb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:04 crc kubenswrapper[4792]: E0319 17:13:04.188477 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa5959e-fb35-4b6e-95de-d7a87bf4479e" containerName="aodh-db-sync" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188486 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2fa5959e-fb35-4b6e-95de-d7a87bf4479e" containerName="aodh-db-sync" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188802 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9318ba4f-8979-46fa-8cb4-e1c12ee94e35" containerName="heat-engine" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188826 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="50866ef3-6742-4a83-a766-2c075a8d45cb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.188855 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa5959e-fb35-4b6e-95de-d7a87bf4479e" containerName="aodh-db-sync" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.189899 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.193282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.193473 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.193503 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.194539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.202680 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4"] Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.321717 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l4zff\" (UniqueName: \"kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.321827 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.322143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.424549 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.425303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: 
\"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.425437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4zff\" (UniqueName: \"kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.431730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.443718 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.463095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4zff\" (UniqueName: \"kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zmbb4\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:04 crc kubenswrapper[4792]: I0319 17:13:04.511520 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.191233 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4"] Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.639881 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.656419 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.794682 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795141 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795163 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: 
\"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8h6\" (UniqueName: \"kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795267 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795284 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.795347 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796722 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796816 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796878 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zkdp\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 
17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.796984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins\") pod \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\" (UID: \"3daeb97c-0c99-4d2c-8d07-5b168bf010d9\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.797015 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data\") pod \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\" (UID: \"e89f502e-a41f-45ca-89ef-93a4f4ac4f62\") " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.800645 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.800665 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.805465 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp" (OuterVolumeSpecName: "kube-api-access-5zkdp") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). 
InnerVolumeSpecName "kube-api-access-5zkdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.813677 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.815571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6" (OuterVolumeSpecName: "kube-api-access-9x8h6") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "kube-api-access-9x8h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.815668 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info" (OuterVolumeSpecName: "pod-info") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.816116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.822190 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts" (OuterVolumeSpecName: "scripts") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.825588 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.851980 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4" (OuterVolumeSpecName: "persistence") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.897000 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data" (OuterVolumeSpecName: "config-data") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.899569 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf" (OuterVolumeSpecName: "server-conf") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.914330 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918058 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918171 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") on node \"crc\" " Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918250 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918319 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-server-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918406 4792 reconciler_common.go:293] "Volume detached for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918469 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zkdp\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-kube-api-access-5zkdp\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918533 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918594 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918691 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-scripts\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918756 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-pod-info\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.918816 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8h6\" (UniqueName: \"kubernetes.io/projected/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-kube-api-access-9x8h6\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.968947 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.969514 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4") on node "crc" Mar 19 17:13:05 crc kubenswrapper[4792]: I0319 17:13:05.982990 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.009432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data" (OuterVolumeSpecName: "config-data") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.015807 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.022391 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.022426 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.022441 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.022452 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.036052 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89f502e-a41f-45ca-89ef-93a4f4ac4f62" (UID: "e89f502e-a41f-45ca-89ef-93a4f4ac4f62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.104887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3daeb97c-0c99-4d2c-8d07-5b168bf010d9" (UID: "3daeb97c-0c99-4d2c-8d07-5b168bf010d9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122013 4792 generic.go:334] "Generic (PLEG): container finished" podID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerID="221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" exitCode=0 Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122056 4792 generic.go:334] "Generic (PLEG): container finished" podID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerID="1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" exitCode=0 Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122168 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122600 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerDied","Data":"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122654 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerDied","Data":"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122665 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89f502e-a41f-45ca-89ef-93a4f4ac4f62","Type":"ContainerDied","Data":"01c3117975fbaa47f8923a88fcee93aad4fde4b4434027baea1ae86323afd312"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.122682 4792 scope.go:117] "RemoveContainer" containerID="221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.125107 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e89f502e-a41f-45ca-89ef-93a4f4ac4f62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.125136 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3daeb97c-0c99-4d2c-8d07-5b168bf010d9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.126268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" event={"ID":"e643e89e-4c37-4a1d-a4ee-9a47fab99015","Type":"ContainerStarted","Data":"207c40998643cb7e2c55435417ef72d8fe68bafcba643dd29d2a0aa5598d8480"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.134705 4792 generic.go:334] "Generic (PLEG): container finished" podID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerID="db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544" exitCode=0 Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.134813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerDied","Data":"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.134898 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3daeb97c-0c99-4d2c-8d07-5b168bf010d9","Type":"ContainerDied","Data":"ff55647277071a20ec2125b111d5df166abfd965af5819eefa5f08bc4bfc47ca"} Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.135006 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.177746 4792 scope.go:117] "RemoveContainer" containerID="1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.182051 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.226677 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.241073 4792 scope.go:117] "RemoveContainer" containerID="89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.257112 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.277035 4792 scope.go:117] "RemoveContainer" containerID="da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.285810 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.286645 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-notifier" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286660 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-notifier" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.286681 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-evaluator" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286687 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-evaluator" Mar 19 17:13:06 crc 
kubenswrapper[4792]: E0319 17:13:06.286699 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-listener" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286705 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-listener" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.286719 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-api" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286725 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-api" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.286745 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286750 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.286763 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="setup-container" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.286769 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="setup-container" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.287026 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" containerName="rabbitmq" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.287038 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-evaluator" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.287055 
4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-notifier" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.287064 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-api" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.287080 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" containerName="aodh-listener" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.289532 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.299520 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.299555 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.299605 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gdn5p" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.299530 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.299530 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.319190 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.335919 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.342323 4792 scope.go:117] "RemoveContainer" 
containerID="221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.342886 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3\": container with ID starting with 221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3 not found: ID does not exist" containerID="221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.342934 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3"} err="failed to get container status \"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3\": rpc error: code = NotFound desc = could not find container \"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3\": container with ID starting with 221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3 not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.342962 4792 scope.go:117] "RemoveContainer" containerID="1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.343278 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae\": container with ID starting with 1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae not found: ID does not exist" containerID="1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343299 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae"} err="failed to get container status \"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae\": rpc error: code = NotFound desc = could not find container \"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae\": container with ID starting with 1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343312 4792 scope.go:117] "RemoveContainer" containerID="89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.343602 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c\": container with ID starting with 89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c not found: ID does not exist" containerID="89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343621 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c"} err="failed to get container status \"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c\": rpc error: code = NotFound desc = could not find container \"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c\": container with ID starting with 89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343636 4792 scope.go:117] "RemoveContainer" containerID="da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.343880 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb\": container with ID starting with da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb not found: ID does not exist" containerID="da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343904 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb"} err="failed to get container status \"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb\": rpc error: code = NotFound desc = could not find container \"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb\": container with ID starting with da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.343915 4792 scope.go:117] "RemoveContainer" containerID="221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344156 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3"} err="failed to get container status \"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3\": rpc error: code = NotFound desc = could not find container \"221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3\": container with ID starting with 221fc336c11e8c187381752d564f39b23ec9de9fc8bc8925bd7ca1b9749febf3 not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344175 4792 scope.go:117] "RemoveContainer" containerID="1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344341 4792 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae"} err="failed to get container status \"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae\": rpc error: code = NotFound desc = could not find container \"1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae\": container with ID starting with 1537347c5770a07646e6a5148739550990a5d3ea810ccf9d5e4e02e11fe126ae not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344360 4792 scope.go:117] "RemoveContainer" containerID="89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344610 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c"} err="failed to get container status \"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c\": rpc error: code = NotFound desc = could not find container \"89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c\": container with ID starting with 89dda334ec09f901904e7a5dae58a2a7924c604a498da7e49f18159d8a455a1c not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.344632 4792 scope.go:117] "RemoveContainer" containerID="da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.345060 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb"} err="failed to get container status \"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb\": rpc error: code = NotFound desc = could not find container \"da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb\": container with ID starting with 
da322c68c0792673b25deaec9ca1d0ff1e1c12bdf09db4ad5b87a1de779525bb not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.345106 4792 scope.go:117] "RemoveContainer" containerID="db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.351669 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.353762 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.378613 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.382204 4792 scope.go:117] "RemoveContainer" containerID="a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.414149 4792 scope.go:117] "RemoveContainer" containerID="db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.417918 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544\": container with ID starting with db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544 not found: ID does not exist" containerID="db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.417961 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544"} err="failed to get container status \"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544\": rpc error: code = NotFound desc = could not find container 
\"db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544\": container with ID starting with db7ad8d21551962e94923aefc6a06046cd12feb101f5ef2e397ae1908336a544 not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.417989 4792 scope.go:117] "RemoveContainer" containerID="a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec" Mar 19 17:13:06 crc kubenswrapper[4792]: E0319 17:13:06.418724 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec\": container with ID starting with a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec not found: ID does not exist" containerID="a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.418773 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec"} err="failed to get container status \"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec\": rpc error: code = NotFound desc = could not find container \"a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec\": container with ID starting with a86e1f23cdfdc7ad3d7f89909099520fde89fd62d3889cb85001dca7007f29ec not found: ID does not exist" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-scripts\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439202 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439226 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439257 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9g2x\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-kube-api-access-w9g2x\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439300 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-config-data\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439321 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/47acf5ef-2d85-427c-81e6-8b8707505206-pod-info\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-config-data\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-public-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439430 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjlp\" (UniqueName: \"kubernetes.io/projected/5d8206f4-d2ae-4db9-9893-091e0f602d74-kube-api-access-lpjlp\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439656 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439706 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47acf5ef-2d85-427c-81e6-8b8707505206-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-server-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-internal-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.439805 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " 
pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542091 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-public-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjlp\" (UniqueName: \"kubernetes.io/projected/5d8206f4-d2ae-4db9-9893-091e0f602d74-kube-api-access-lpjlp\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47acf5ef-2d85-427c-81e6-8b8707505206-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542305 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-server-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-internal-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-scripts\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542523 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " 
pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542583 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9g2x\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-kube-api-access-w9g2x\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542636 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-config-data\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47acf5ef-2d85-427c-81e6-8b8707505206-pod-info\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.542702 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-config-data\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.543873 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.543981 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.548917 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-public-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.549015 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.551357 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " 
pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.551709 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47acf5ef-2d85-427c-81e6-8b8707505206-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.551914 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-internal-tls-certs\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.552330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-config-data\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.552458 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8206f4-d2ae-4db9-9893-091e0f602d74-scripts\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.553023 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.566802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-rabbitmq-confd\") pod 
\"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.567513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-config-data\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.570323 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47acf5ef-2d85-427c-81e6-8b8707505206-pod-info\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.572671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47acf5ef-2d85-427c-81e6-8b8707505206-server-conf\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.573025 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.573053 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8ca26b84d347a31d255fc230498e1b3b968f4e7b0bb0d1644032f336cb0edaa8/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.575832 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9g2x\" (UniqueName: \"kubernetes.io/projected/47acf5ef-2d85-427c-81e6-8b8707505206-kube-api-access-w9g2x\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.582054 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjlp\" (UniqueName: \"kubernetes.io/projected/5d8206f4-d2ae-4db9-9893-091e0f602d74-kube-api-access-lpjlp\") pod \"aodh-0\" (UID: \"5d8206f4-d2ae-4db9-9893-091e0f602d74\") " pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.625787 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.690349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79aeb763-3cda-4ba5-8ef9-648eb2d446b4\") pod \"rabbitmq-server-1\" (UID: \"47acf5ef-2d85-427c-81e6-8b8707505206\") " pod="openstack/rabbitmq-server-1" Mar 19 17:13:06 crc kubenswrapper[4792]: I0319 17:13:06.977218 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.153526 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" event={"ID":"e643e89e-4c37-4a1d-a4ee-9a47fab99015","Type":"ContainerStarted","Data":"8003cfaec839e4c45213da606500768211e4f9a1c3824639a9fa6ae1b81e066c"} Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.162165 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 19 17:13:07 crc kubenswrapper[4792]: W0319 17:13:07.169318 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d8206f4_d2ae_4db9_9893_091e0f602d74.slice/crio-41392dec42b9f06d0d0b9a287711a29df6206ebe516c562f251b424001ff2b06 WatchSource:0}: Error finding container 41392dec42b9f06d0d0b9a287711a29df6206ebe516c562f251b424001ff2b06: Status 404 returned error can't find the container with id 41392dec42b9f06d0d0b9a287711a29df6206ebe516c562f251b424001ff2b06 Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.190107 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" podStartSLOduration=2.760927704 podStartE2EDuration="3.190055474s" podCreationTimestamp="2026-03-19 17:13:04 +0000 UTC" firstStartedPulling="2026-03-19 17:13:05.196023495 +0000 UTC m=+1948.342081035" lastFinishedPulling="2026-03-19 17:13:05.625151265 +0000 UTC m=+1948.771208805" observedRunningTime="2026-03-19 17:13:07.18081137 +0000 UTC m=+1950.326868920" watchObservedRunningTime="2026-03-19 17:13:07.190055474 +0000 UTC m=+1950.336113014" Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.515503 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.790020 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3daeb97c-0c99-4d2c-8d07-5b168bf010d9" path="/var/lib/kubelet/pods/3daeb97c-0c99-4d2c-8d07-5b168bf010d9/volumes" Mar 19 17:13:07 crc kubenswrapper[4792]: I0319 17:13:07.792498 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89f502e-a41f-45ca-89ef-93a4f4ac4f62" path="/var/lib/kubelet/pods/e89f502e-a41f-45ca-89ef-93a4f4ac4f62/volumes" Mar 19 17:13:08 crc kubenswrapper[4792]: I0319 17:13:08.172914 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5d8206f4-d2ae-4db9-9893-091e0f602d74","Type":"ContainerStarted","Data":"dfbc2a9774cf862b64b71a06a9d9684a8347222d2fc18eb54e3473230aee1e60"} Mar 19 17:13:08 crc kubenswrapper[4792]: I0319 17:13:08.173209 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5d8206f4-d2ae-4db9-9893-091e0f602d74","Type":"ContainerStarted","Data":"41392dec42b9f06d0d0b9a287711a29df6206ebe516c562f251b424001ff2b06"} Mar 19 17:13:08 crc kubenswrapper[4792]: I0319 17:13:08.175150 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"47acf5ef-2d85-427c-81e6-8b8707505206","Type":"ContainerStarted","Data":"111ee3d39bcc11f1e91921f5e9c058259c26a9d42c8a48d72c15c0035794f7c9"} Mar 19 17:13:09 crc kubenswrapper[4792]: I0319 17:13:09.192520 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5d8206f4-d2ae-4db9-9893-091e0f602d74","Type":"ContainerStarted","Data":"11481c356870af4a3147b3a2642e13247349913044d2be6ac2712abc6e044710"} Mar 19 17:13:09 crc kubenswrapper[4792]: I0319 17:13:09.200148 4792 generic.go:334] "Generic (PLEG): container finished" podID="e643e89e-4c37-4a1d-a4ee-9a47fab99015" containerID="8003cfaec839e4c45213da606500768211e4f9a1c3824639a9fa6ae1b81e066c" exitCode=0 Mar 19 17:13:09 crc kubenswrapper[4792]: I0319 17:13:09.200201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" 
event={"ID":"e643e89e-4c37-4a1d-a4ee-9a47fab99015","Type":"ContainerDied","Data":"8003cfaec839e4c45213da606500768211e4f9a1c3824639a9fa6ae1b81e066c"} Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.212834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5d8206f4-d2ae-4db9-9893-091e0f602d74","Type":"ContainerStarted","Data":"e236e5e7c796fe59a9f452dfd366ec50d23717f8f3e23ca043eadeea096c04c4"} Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.215130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"47acf5ef-2d85-427c-81e6-8b8707505206","Type":"ContainerStarted","Data":"bc3a302a96a8681b27e331225a04573e27ce55460da6fae37749ff65c1b5dfa4"} Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.825378 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.867200 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam\") pod \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.867330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4zff\" (UniqueName: \"kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff\") pod \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\" (UID: \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.867426 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory\") pod \"e643e89e-4c37-4a1d-a4ee-9a47fab99015\" (UID: 
\"e643e89e-4c37-4a1d-a4ee-9a47fab99015\") " Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.875530 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff" (OuterVolumeSpecName: "kube-api-access-l4zff") pod "e643e89e-4c37-4a1d-a4ee-9a47fab99015" (UID: "e643e89e-4c37-4a1d-a4ee-9a47fab99015"). InnerVolumeSpecName "kube-api-access-l4zff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.908508 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e643e89e-4c37-4a1d-a4ee-9a47fab99015" (UID: "e643e89e-4c37-4a1d-a4ee-9a47fab99015"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.911721 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory" (OuterVolumeSpecName: "inventory") pod "e643e89e-4c37-4a1d-a4ee-9a47fab99015" (UID: "e643e89e-4c37-4a1d-a4ee-9a47fab99015"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.972041 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.972075 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4zff\" (UniqueName: \"kubernetes.io/projected/e643e89e-4c37-4a1d-a4ee-9a47fab99015-kube-api-access-l4zff\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:10 crc kubenswrapper[4792]: I0319 17:13:10.972087 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e643e89e-4c37-4a1d-a4ee-9a47fab99015-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.241082 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.241083 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zmbb4" event={"ID":"e643e89e-4c37-4a1d-a4ee-9a47fab99015","Type":"ContainerDied","Data":"207c40998643cb7e2c55435417ef72d8fe68bafcba643dd29d2a0aa5598d8480"} Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.241140 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="207c40998643cb7e2c55435417ef72d8fe68bafcba643dd29d2a0aa5598d8480" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.312272 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc"] Mar 19 17:13:11 crc kubenswrapper[4792]: E0319 17:13:11.313262 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e643e89e-4c37-4a1d-a4ee-9a47fab99015" 
containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.313279 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e643e89e-4c37-4a1d-a4ee-9a47fab99015" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.313594 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e643e89e-4c37-4a1d-a4ee-9a47fab99015" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.314695 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.318778 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.319187 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.319644 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.320610 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.327194 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc"] Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.484113 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9d92\" (UniqueName: \"kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: 
\"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.484212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.484394 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.484438 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.585807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc 
kubenswrapper[4792]: I0319 17:13:11.585863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.585999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9d92\" (UniqueName: \"kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.586030 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.589801 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.591590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.591957 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.609656 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9d92\" (UniqueName: \"kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:11 crc kubenswrapper[4792]: I0319 17:13:11.794369 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:13:12 crc kubenswrapper[4792]: I0319 17:13:12.254849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5d8206f4-d2ae-4db9-9893-091e0f602d74","Type":"ContainerStarted","Data":"b2a44894b7fd961e6f07f156abaa0d7182a8a887310ff126b3cc6c1fc16a3558"} Mar 19 17:13:12 crc kubenswrapper[4792]: I0319 17:13:12.285951 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.171622541 podStartE2EDuration="6.285928585s" podCreationTimestamp="2026-03-19 17:13:06 +0000 UTC" firstStartedPulling="2026-03-19 17:13:07.171351941 +0000 UTC m=+1950.317409491" lastFinishedPulling="2026-03-19 17:13:11.285657995 +0000 UTC m=+1954.431715535" observedRunningTime="2026-03-19 17:13:12.277642267 +0000 UTC m=+1955.423699807" watchObservedRunningTime="2026-03-19 17:13:12.285928585 +0000 UTC m=+1955.431986135" Mar 19 17:13:12 crc kubenswrapper[4792]: I0319 17:13:12.336036 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc"] Mar 19 17:13:12 crc kubenswrapper[4792]: W0319 17:13:12.336129 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a911839_8c9b_43da_9ef6_eed89833426e.slice/crio-b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5 WatchSource:0}: Error finding container b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5: Status 404 returned error can't find the container with id b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5 Mar 19 17:13:13 crc kubenswrapper[4792]: I0319 17:13:13.275603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" 
event={"ID":"9a911839-8c9b-43da-9ef6-eed89833426e","Type":"ContainerStarted","Data":"b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5"} Mar 19 17:13:14 crc kubenswrapper[4792]: I0319 17:13:14.289546 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" event={"ID":"9a911839-8c9b-43da-9ef6-eed89833426e","Type":"ContainerStarted","Data":"a7db00d01d42ab4943e0bd3f49f1be252108af28b71b14e07fa3735fdebad16b"} Mar 19 17:13:14 crc kubenswrapper[4792]: I0319 17:13:14.312551 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" podStartSLOduration=2.684205559 podStartE2EDuration="3.312537298s" podCreationTimestamp="2026-03-19 17:13:11 +0000 UTC" firstStartedPulling="2026-03-19 17:13:12.338166648 +0000 UTC m=+1955.484224188" lastFinishedPulling="2026-03-19 17:13:12.966498387 +0000 UTC m=+1956.112555927" observedRunningTime="2026-03-19 17:13:14.308808675 +0000 UTC m=+1957.454866215" watchObservedRunningTime="2026-03-19 17:13:14.312537298 +0000 UTC m=+1957.458594838" Mar 19 17:13:41 crc kubenswrapper[4792]: I0319 17:13:41.600992 4792 generic.go:334] "Generic (PLEG): container finished" podID="47acf5ef-2d85-427c-81e6-8b8707505206" containerID="bc3a302a96a8681b27e331225a04573e27ce55460da6fae37749ff65c1b5dfa4" exitCode=0 Mar 19 17:13:41 crc kubenswrapper[4792]: I0319 17:13:41.601089 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"47acf5ef-2d85-427c-81e6-8b8707505206","Type":"ContainerDied","Data":"bc3a302a96a8681b27e331225a04573e27ce55460da6fae37749ff65c1b5dfa4"} Mar 19 17:13:42 crc kubenswrapper[4792]: I0319 17:13:42.614308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"47acf5ef-2d85-427c-81e6-8b8707505206","Type":"ContainerStarted","Data":"b9409c80bc9daf38c7e327b56dbc915475a91853bbc70572aad398532235e4e8"} 
Mar 19 17:13:42 crc kubenswrapper[4792]: I0319 17:13:42.614999 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 19 17:13:42 crc kubenswrapper[4792]: I0319 17:13:42.643939 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.64390497 podStartE2EDuration="36.64390497s" podCreationTimestamp="2026-03-19 17:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:13:42.636609019 +0000 UTC m=+1985.782666569" watchObservedRunningTime="2026-03-19 17:13:42.64390497 +0000 UTC m=+1985.789962510" Mar 19 17:13:56 crc kubenswrapper[4792]: I0319 17:13:56.980897 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 19 17:13:57 crc kubenswrapper[4792]: I0319 17:13:57.055576 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.166231 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565674-s2pph"] Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.170147 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.174957 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.175159 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.183791 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.192684 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565674-s2pph"] Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.322385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767m8\" (UniqueName: \"kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8\") pod \"auto-csr-approver-29565674-s2pph\" (UID: \"6f4785ef-7e44-4cbc-8d9c-d96670a13000\") " pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.435448 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767m8\" (UniqueName: \"kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8\") pod \"auto-csr-approver-29565674-s2pph\" (UID: \"6f4785ef-7e44-4cbc-8d9c-d96670a13000\") " pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.463994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767m8\" (UniqueName: \"kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8\") pod \"auto-csr-approver-29565674-s2pph\" (UID: \"6f4785ef-7e44-4cbc-8d9c-d96670a13000\") " 
pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:00 crc kubenswrapper[4792]: I0319 17:14:00.834581 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:01 crc kubenswrapper[4792]: I0319 17:14:01.055418 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" containerID="cri-o://00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5" gracePeriod=604797 Mar 19 17:14:01 crc kubenswrapper[4792]: I0319 17:14:01.356070 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565674-s2pph"] Mar 19 17:14:01 crc kubenswrapper[4792]: I0319 17:14:01.670270 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 19 17:14:01 crc kubenswrapper[4792]: I0319 17:14:01.867769 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565674-s2pph" event={"ID":"6f4785ef-7e44-4cbc-8d9c-d96670a13000","Type":"ContainerStarted","Data":"9553457a15ebcbfba6218db470ef63aa056f27753825b03cdc6152252db83409"} Mar 19 17:14:02 crc kubenswrapper[4792]: I0319 17:14:02.954609 4792 scope.go:117] "RemoveContainer" containerID="cdab2a29e594bc9da55ff2d3e3cbed2d7331b9e2287aff1b5bfd36d28b40af03" Mar 19 17:14:04 crc kubenswrapper[4792]: I0319 17:14:04.909706 4792 generic.go:334] "Generic (PLEG): container finished" podID="6f4785ef-7e44-4cbc-8d9c-d96670a13000" containerID="6ca3498d6b52c51c42ca0813b56546c94d3ed426e57e93a87b6b6a890ecc975e" exitCode=0 Mar 19 17:14:04 crc kubenswrapper[4792]: I0319 17:14:04.909804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29565674-s2pph" event={"ID":"6f4785ef-7e44-4cbc-8d9c-d96670a13000","Type":"ContainerDied","Data":"6ca3498d6b52c51c42ca0813b56546c94d3ed426e57e93a87b6b6a890ecc975e"} Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.331421 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.502990 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767m8\" (UniqueName: \"kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8\") pod \"6f4785ef-7e44-4cbc-8d9c-d96670a13000\" (UID: \"6f4785ef-7e44-4cbc-8d9c-d96670a13000\") " Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.508283 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8" (OuterVolumeSpecName: "kube-api-access-767m8") pod "6f4785ef-7e44-4cbc-8d9c-d96670a13000" (UID: "6f4785ef-7e44-4cbc-8d9c-d96670a13000"). InnerVolumeSpecName "kube-api-access-767m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.607803 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767m8\" (UniqueName: \"kubernetes.io/projected/6f4785ef-7e44-4cbc-8d9c-d96670a13000-kube-api-access-767m8\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.934099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565674-s2pph" event={"ID":"6f4785ef-7e44-4cbc-8d9c-d96670a13000","Type":"ContainerDied","Data":"9553457a15ebcbfba6218db470ef63aa056f27753825b03cdc6152252db83409"} Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.934612 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9553457a15ebcbfba6218db470ef63aa056f27753825b03cdc6152252db83409" Mar 19 17:14:06 crc kubenswrapper[4792]: I0319 17:14:06.934169 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565674-s2pph" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.406699 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565668-qtbgd"] Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.417570 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565668-qtbgd"] Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.726484 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.764287 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea79aa54-fee6-4c52-a337-2a7e3a3da9ca" path="/var/lib/kubelet/pods/ea79aa54-fee6-4c52-a337-2a7e3a3da9ca/volumes" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837297 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837354 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq8kk\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837516 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837579 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837737 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.837779 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.838538 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.838582 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.838639 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 
17:14:07.838687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret\") pod \"ae950307-1857-4a46-ab98-55843387f128\" (UID: \"ae950307-1857-4a46-ab98-55843387f128\") " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.853713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.855159 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.857463 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.859130 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.861368 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk" (OuterVolumeSpecName: "kube-api-access-fq8kk") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "kube-api-access-fq8kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.861948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.863626 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info" (OuterVolumeSpecName: "pod-info") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.931150 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data" (OuterVolumeSpecName: "config-data") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.934198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e" (OuterVolumeSpecName: "persistence") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "pvc-6ffab986-b438-490d-840c-9462220a192e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947449 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947477 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947508 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") on node \"crc\" " Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947519 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947530 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae950307-1857-4a46-ab98-55843387f128-pod-info\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947540 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae950307-1857-4a46-ab98-55843387f128-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947548 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947557 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq8kk\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-kube-api-access-fq8kk\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947568 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947663 4792 generic.go:334] "Generic (PLEG): container finished" podID="ae950307-1857-4a46-ab98-55843387f128" containerID="00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5" exitCode=0 Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerDied","Data":"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5"} Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae950307-1857-4a46-ab98-55843387f128","Type":"ContainerDied","Data":"21c9922008dfa0267810397560ff1cadb9a18a683205b9fe39894866c0e924fc"} Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947745 4792 scope.go:117] "RemoveContainer" containerID="00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.947904 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.952440 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf" (OuterVolumeSpecName: "server-conf") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.997154 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 17:14:07 crc kubenswrapper[4792]: I0319 17:14:07.997349 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ffab986-b438-490d-840c-9462220a192e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e") on node "crc" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.031682 4792 scope.go:117] "RemoveContainer" containerID="7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.034789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ae950307-1857-4a46-ab98-55843387f128" (UID: "ae950307-1857-4a46-ab98-55843387f128"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.049893 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae950307-1857-4a46-ab98-55843387f128-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.049927 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae950307-1857-4a46-ab98-55843387f128-server-conf\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.049938 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") on node \"crc\" DevicePath \"\"" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.069757 4792 scope.go:117] "RemoveContainer" containerID="00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5" Mar 19 17:14:08 crc kubenswrapper[4792]: E0319 17:14:08.071747 
4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5\": container with ID starting with 00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5 not found: ID does not exist" containerID="00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.071802 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5"} err="failed to get container status \"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5\": rpc error: code = NotFound desc = could not find container \"00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5\": container with ID starting with 00ce30c193cc04a1862650078b994e3f730efd7bea428a9d3a046563fe9494e5 not found: ID does not exist" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.071855 4792 scope.go:117] "RemoveContainer" containerID="7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e" Mar 19 17:14:08 crc kubenswrapper[4792]: E0319 17:14:08.073228 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e\": container with ID starting with 7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e not found: ID does not exist" containerID="7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.073278 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e"} err="failed to get container status \"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e\": rpc error: code = 
NotFound desc = could not find container \"7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e\": container with ID starting with 7092ef9843fd3d6e6e629482b8365407826a23b5a7baaefaab49fcd4def26d6e not found: ID does not exist" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.287781 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.305084 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.324045 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:08 crc kubenswrapper[4792]: E0319 17:14:08.324814 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.324854 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" Mar 19 17:14:08 crc kubenswrapper[4792]: E0319 17:14:08.324891 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="setup-container" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.324898 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="setup-container" Mar 19 17:14:08 crc kubenswrapper[4792]: E0319 17:14:08.324912 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4785ef-7e44-4cbc-8d9c-d96670a13000" containerName="oc" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.324922 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4785ef-7e44-4cbc-8d9c-d96670a13000" containerName="oc" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.325232 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae950307-1857-4a46-ab98-55843387f128" containerName="rabbitmq" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.325268 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4785ef-7e44-4cbc-8d9c-d96670a13000" containerName="oc" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.326735 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.337406 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.458907 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.458962 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459282 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-config-data\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459410 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459430 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/915362ac-1fcd-4e45-9dea-c19af9bee06e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld58g\" (UniqueName: 
\"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-kube-api-access-ld58g\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459509 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/915362ac-1fcd-4e45-9dea-c19af9bee06e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.459553 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562348 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-config-data\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562967 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563048 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/915362ac-1fcd-4e45-9dea-c19af9bee06e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld58g\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-kube-api-access-ld58g\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/915362ac-1fcd-4e45-9dea-c19af9bee06e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563288 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563707 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.562887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.563617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-config-data\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.565014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/915362ac-1fcd-4e45-9dea-c19af9bee06e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.567454 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.568431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.568723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/915362ac-1fcd-4e45-9dea-c19af9bee06e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc 
kubenswrapper[4792]: I0319 17:14:08.578720 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.578786 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c4c41f9e3b3f86aec75af0301350a0459ec825b041a9d3df4027e381e6ff6c22/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.579650 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/915362ac-1fcd-4e45-9dea-c19af9bee06e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.585916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld58g\" (UniqueName: \"kubernetes.io/projected/915362ac-1fcd-4e45-9dea-c19af9bee06e-kube-api-access-ld58g\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.653236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ffab986-b438-490d-840c-9462220a192e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ffab986-b438-490d-840c-9462220a192e\") pod \"rabbitmq-server-0\" (UID: \"915362ac-1fcd-4e45-9dea-c19af9bee06e\") " pod="openstack/rabbitmq-server-0" Mar 19 17:14:08 crc kubenswrapper[4792]: I0319 17:14:08.681530 4792 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 17:14:09 crc kubenswrapper[4792]: W0319 17:14:09.200455 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod915362ac_1fcd_4e45_9dea_c19af9bee06e.slice/crio-511a2c521017902028f144cdabaadbff0e6f6868802af4b0c99fe835fc45ecd8 WatchSource:0}: Error finding container 511a2c521017902028f144cdabaadbff0e6f6868802af4b0c99fe835fc45ecd8: Status 404 returned error can't find the container with id 511a2c521017902028f144cdabaadbff0e6f6868802af4b0c99fe835fc45ecd8 Mar 19 17:14:09 crc kubenswrapper[4792]: I0319 17:14:09.209086 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 17:14:09 crc kubenswrapper[4792]: I0319 17:14:09.752066 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae950307-1857-4a46-ab98-55843387f128" path="/var/lib/kubelet/pods/ae950307-1857-4a46-ab98-55843387f128/volumes" Mar 19 17:14:09 crc kubenswrapper[4792]: I0319 17:14:09.971336 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"915362ac-1fcd-4e45-9dea-c19af9bee06e","Type":"ContainerStarted","Data":"511a2c521017902028f144cdabaadbff0e6f6868802af4b0c99fe835fc45ecd8"} Mar 19 17:14:11 crc kubenswrapper[4792]: I0319 17:14:11.995983 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"915362ac-1fcd-4e45-9dea-c19af9bee06e","Type":"ContainerStarted","Data":"91c1e90ba0982e00887b6fe5661692586b56cd21bf97d21ee0c26e228cc133da"} Mar 19 17:14:43 crc kubenswrapper[4792]: I0319 17:14:43.374273 4792 generic.go:334] "Generic (PLEG): container finished" podID="915362ac-1fcd-4e45-9dea-c19af9bee06e" containerID="91c1e90ba0982e00887b6fe5661692586b56cd21bf97d21ee0c26e228cc133da" exitCode=0 Mar 19 17:14:43 crc kubenswrapper[4792]: I0319 17:14:43.374348 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"915362ac-1fcd-4e45-9dea-c19af9bee06e","Type":"ContainerDied","Data":"91c1e90ba0982e00887b6fe5661692586b56cd21bf97d21ee0c26e228cc133da"} Mar 19 17:14:44 crc kubenswrapper[4792]: I0319 17:14:44.386849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"915362ac-1fcd-4e45-9dea-c19af9bee06e","Type":"ContainerStarted","Data":"53d643a50a0a3a5c673a4fffff466a2684a9b1642c0e40e6a8588daebb26a16e"} Mar 19 17:14:44 crc kubenswrapper[4792]: I0319 17:14:44.388275 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 19 17:14:44 crc kubenswrapper[4792]: I0319 17:14:44.424405 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.424378533 podStartE2EDuration="36.424378533s" podCreationTimestamp="2026-03-19 17:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:14:44.405637359 +0000 UTC m=+2047.551694909" watchObservedRunningTime="2026-03-19 17:14:44.424378533 +0000 UTC m=+2047.570436113" Mar 19 17:14:50 crc kubenswrapper[4792]: I0319 17:14:50.231284 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:14:50 crc kubenswrapper[4792]: I0319 17:14:50.231929 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:14:58 crc 
kubenswrapper[4792]: I0319 17:14:58.685085 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.161312 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv"] Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.163093 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.165137 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.169921 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.176641 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv"] Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.324489 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.324707 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nrn\" (UniqueName: \"kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.324888 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.427205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47nrn\" (UniqueName: \"kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.427354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.427427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.428371 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.434753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.445185 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47nrn\" (UniqueName: \"kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn\") pod \"collect-profiles-29565675-k7vjv\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.494872 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:00 crc kubenswrapper[4792]: I0319 17:15:00.992574 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv"] Mar 19 17:15:01 crc kubenswrapper[4792]: I0319 17:15:01.578354 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" event={"ID":"6b8e0d3b-3d92-47a8-a0ca-34d66790a567","Type":"ContainerStarted","Data":"84e91a5ec03452475a4e1b84540581ce33541d9b0d7c56c8dab7edce3b2daa6e"} Mar 19 17:15:01 crc kubenswrapper[4792]: I0319 17:15:01.578406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" event={"ID":"6b8e0d3b-3d92-47a8-a0ca-34d66790a567","Type":"ContainerStarted","Data":"fd96ed68b85f183e31930029cc0f27a1cf8571bc79e3c7520eeed1f3b0b283be"} Mar 19 17:15:01 crc kubenswrapper[4792]: I0319 17:15:01.598636 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" podStartSLOduration=1.598608536 podStartE2EDuration="1.598608536s" podCreationTimestamp="2026-03-19 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 17:15:01.591279045 +0000 UTC m=+2064.737336585" watchObservedRunningTime="2026-03-19 17:15:01.598608536 +0000 UTC m=+2064.744666086" Mar 19 17:15:02 crc kubenswrapper[4792]: I0319 17:15:02.591045 4792 generic.go:334] "Generic (PLEG): container finished" podID="6b8e0d3b-3d92-47a8-a0ca-34d66790a567" containerID="84e91a5ec03452475a4e1b84540581ce33541d9b0d7c56c8dab7edce3b2daa6e" exitCode=0 Mar 19 17:15:02 crc kubenswrapper[4792]: I0319 17:15:02.591087 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" event={"ID":"6b8e0d3b-3d92-47a8-a0ca-34d66790a567","Type":"ContainerDied","Data":"84e91a5ec03452475a4e1b84540581ce33541d9b0d7c56c8dab7edce3b2daa6e"} Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.084205 4792 scope.go:117] "RemoveContainer" containerID="42f1fd9797cd24239c8bab1e7501926cf12f9cce920f9d94c0561ac1f7227878" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.132317 4792 scope.go:117] "RemoveContainer" containerID="dce6f83b932bc60629e2d547e2854e0cc5e89382ce9c281ed71a00d66632a143" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.258482 4792 scope.go:117] "RemoveContainer" containerID="0f5a4c19f982a0989895ea3de8070e105bb11f88f371c336cdcaf224a07e8247" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.289404 4792 scope.go:117] "RemoveContainer" containerID="b1debf73ed0650f2a5f3ed8f389791b631455d1d1527fe673753b94e76146f1f" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.314314 4792 scope.go:117] "RemoveContainer" containerID="c66629016a6691354e540cf9c0e4e4e4b7f4508262b7de34aaec9306ed81224c" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.336658 4792 scope.go:117] "RemoveContainer" containerID="0d13c388bf16b5850ae5f088e52a018b436a22574a39ede8159d298a56c2b3a5" Mar 19 17:15:03 crc kubenswrapper[4792]: I0319 17:15:03.358979 4792 scope.go:117] "RemoveContainer" containerID="dd260648e24d23ad85a98a24ec7fc1dc6740f4a633b1da050f2c54c87bf3421a" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.078039 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.214380 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume\") pod \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.214611 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume\") pod \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.214700 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47nrn\" (UniqueName: \"kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn\") pod \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\" (UID: \"6b8e0d3b-3d92-47a8-a0ca-34d66790a567\") " Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.215151 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b8e0d3b-3d92-47a8-a0ca-34d66790a567" (UID: "6b8e0d3b-3d92-47a8-a0ca-34d66790a567"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.215629 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.220642 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b8e0d3b-3d92-47a8-a0ca-34d66790a567" (UID: "6b8e0d3b-3d92-47a8-a0ca-34d66790a567"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.221160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn" (OuterVolumeSpecName: "kube-api-access-47nrn") pod "6b8e0d3b-3d92-47a8-a0ca-34d66790a567" (UID: "6b8e0d3b-3d92-47a8-a0ca-34d66790a567"). InnerVolumeSpecName "kube-api-access-47nrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.317822 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47nrn\" (UniqueName: \"kubernetes.io/projected/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-kube-api-access-47nrn\") on node \"crc\" DevicePath \"\"" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.318037 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8e0d3b-3d92-47a8-a0ca-34d66790a567-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.619694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" event={"ID":"6b8e0d3b-3d92-47a8-a0ca-34d66790a567","Type":"ContainerDied","Data":"fd96ed68b85f183e31930029cc0f27a1cf8571bc79e3c7520eeed1f3b0b283be"} Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.619742 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd96ed68b85f183e31930029cc0f27a1cf8571bc79e3c7520eeed1f3b0b283be" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.619762 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv" Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.683060 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"] Mar 19 17:15:04 crc kubenswrapper[4792]: I0319 17:15:04.702240 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565630-8fgzl"] Mar 19 17:15:05 crc kubenswrapper[4792]: I0319 17:15:05.758816 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecfa468d-32df-43ac-8884-40aad47fd099" path="/var/lib/kubelet/pods/ecfa468d-32df-43ac-8884-40aad47fd099/volumes" Mar 19 17:15:20 crc kubenswrapper[4792]: I0319 17:15:20.231283 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:15:20 crc kubenswrapper[4792]: I0319 17:15:20.231956 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:15:50 crc kubenswrapper[4792]: I0319 17:15:50.230456 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:15:50 crc kubenswrapper[4792]: I0319 17:15:50.231050 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:15:50 crc kubenswrapper[4792]: I0319 17:15:50.231098 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:15:50 crc kubenswrapper[4792]: I0319 17:15:50.231736 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:15:50 crc kubenswrapper[4792]: I0319 17:15:50.231778 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6" gracePeriod=600 Mar 19 17:15:51 crc kubenswrapper[4792]: I0319 17:15:51.201351 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6" exitCode=0 Mar 19 17:15:51 crc kubenswrapper[4792]: I0319 17:15:51.201437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6"} Mar 19 17:15:51 crc kubenswrapper[4792]: I0319 17:15:51.201970 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6"} Mar 19 17:15:51 crc kubenswrapper[4792]: I0319 17:15:51.201995 4792 scope.go:117] "RemoveContainer" containerID="20c1fe08c68bfc2576a1d4f545028ee01801d1c57dc5c1535b0e994beeff178c" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.148751 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565676-4rrrc"] Mar 19 17:16:00 crc kubenswrapper[4792]: E0319 17:16:00.149965 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8e0d3b-3d92-47a8-a0ca-34d66790a567" containerName="collect-profiles" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.149984 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8e0d3b-3d92-47a8-a0ca-34d66790a567" containerName="collect-profiles" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.150333 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8e0d3b-3d92-47a8-a0ca-34d66790a567" containerName="collect-profiles" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.151465 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.155092 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.155390 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.155644 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.161547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565676-4rrrc"] Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.279136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46xd\" (UniqueName: \"kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd\") pod \"auto-csr-approver-29565676-4rrrc\" (UID: \"a399b4b6-821d-48b7-a4e7-8d67428edfa6\") " pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.382724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46xd\" (UniqueName: \"kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd\") pod \"auto-csr-approver-29565676-4rrrc\" (UID: \"a399b4b6-821d-48b7-a4e7-8d67428edfa6\") " pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.402827 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46xd\" (UniqueName: \"kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd\") pod \"auto-csr-approver-29565676-4rrrc\" (UID: \"a399b4b6-821d-48b7-a4e7-8d67428edfa6\") " 
pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.475308 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.975805 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565676-4rrrc"] Mar 19 17:16:00 crc kubenswrapper[4792]: I0319 17:16:00.980503 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:16:01 crc kubenswrapper[4792]: I0319 17:16:01.062689 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rd64m"] Mar 19 17:16:01 crc kubenswrapper[4792]: I0319 17:16:01.079193 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rd64m"] Mar 19 17:16:01 crc kubenswrapper[4792]: I0319 17:16:01.309929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" event={"ID":"a399b4b6-821d-48b7-a4e7-8d67428edfa6","Type":"ContainerStarted","Data":"d829bff901585e7e63f6e071439e5aa041f43e0538a918a066e3b2b768eb43c9"} Mar 19 17:16:01 crc kubenswrapper[4792]: I0319 17:16:01.753483 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c423805e-1778-42a6-a1ac-b44254aa03fe" path="/var/lib/kubelet/pods/c423805e-1778-42a6-a1ac-b44254aa03fe/volumes" Mar 19 17:16:02 crc kubenswrapper[4792]: I0319 17:16:02.036400 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a28a-account-create-update-pxpnr"] Mar 19 17:16:02 crc kubenswrapper[4792]: I0319 17:16:02.049499 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a28a-account-create-update-pxpnr"] Mar 19 17:16:02 crc kubenswrapper[4792]: I0319 17:16:02.322765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29565676-4rrrc" event={"ID":"a399b4b6-821d-48b7-a4e7-8d67428edfa6","Type":"ContainerStarted","Data":"de1b1ba0db865e7e45c82f8fa7a90f888b31b64f81ef2be5cd00a4708e08e86d"} Mar 19 17:16:02 crc kubenswrapper[4792]: I0319 17:16:02.345356 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" podStartSLOduration=1.408858632 podStartE2EDuration="2.34533539s" podCreationTimestamp="2026-03-19 17:16:00 +0000 UTC" firstStartedPulling="2026-03-19 17:16:00.980281447 +0000 UTC m=+2124.126338987" lastFinishedPulling="2026-03-19 17:16:01.916758205 +0000 UTC m=+2125.062815745" observedRunningTime="2026-03-19 17:16:02.339639984 +0000 UTC m=+2125.485697524" watchObservedRunningTime="2026-03-19 17:16:02.34533539 +0000 UTC m=+2125.491392920" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.060222 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-h8f7f"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.078459 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-bf32-account-create-update-gwgzp"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.096991 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-eeeb-account-create-update-jr92l"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.114045 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cbpz5"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.129253 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-h8f7f"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.144074 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-bf32-account-create-update-gwgzp"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.158791 4792 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-eeeb-account-create-update-jr92l"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.172623 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cbpz5"] Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.337727 4792 generic.go:334] "Generic (PLEG): container finished" podID="a399b4b6-821d-48b7-a4e7-8d67428edfa6" containerID="de1b1ba0db865e7e45c82f8fa7a90f888b31b64f81ef2be5cd00a4708e08e86d" exitCode=0 Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.337776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" event={"ID":"a399b4b6-821d-48b7-a4e7-8d67428edfa6","Type":"ContainerDied","Data":"de1b1ba0db865e7e45c82f8fa7a90f888b31b64f81ef2be5cd00a4708e08e86d"} Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.513870 4792 scope.go:117] "RemoveContainer" containerID="98c6d049a8c997ad0114db655559f053f2cdb78e59de8e1a77127e87b29df356" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.541363 4792 scope.go:117] "RemoveContainer" containerID="50326bf6fa3a07a36f840845b93e6822fa6056888b7f7d0b3d24ded829d1a2a2" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.563516 4792 scope.go:117] "RemoveContainer" containerID="03768a66eac5cea8438b6aa509ee6a5c2533893528703cb3a40c6839aa3ff249" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.593530 4792 scope.go:117] "RemoveContainer" containerID="a5819ecc588c431f8a724682fd8f13a1eed5dc7d0490b65174272becc7b549ee" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.621119 4792 scope.go:117] "RemoveContainer" containerID="91db90c2c9f57962d32afa945048c1466b78413b1c70ca48224a67dca79149c7" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.648201 4792 scope.go:117] "RemoveContainer" containerID="28d7b8fda4fe20ed08ac788846fc922ad7b15904f8b6c58ee2eb695ab36fc647" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.691556 4792 scope.go:117] 
"RemoveContainer" containerID="bde8d4cb05f66c4e8fe49ae1e4c2c0325aa27edc0acddb72d8cbfedff8947fa5" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.737352 4792 scope.go:117] "RemoveContainer" containerID="9ad04bd5c1c44315e2e6a7226b93f4435d7e189f7f6fb7df9f8221bac04bd8c7" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.768717 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34502e69-2af6-4cbd-8854-753cd654fc49" path="/var/lib/kubelet/pods/34502e69-2af6-4cbd-8854-753cd654fc49/volumes" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.770800 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ccf938-23a0-4851-8cc4-a30bc91fdf3a" path="/var/lib/kubelet/pods/66ccf938-23a0-4851-8cc4-a30bc91fdf3a/volumes" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.771495 4792 scope.go:117] "RemoveContainer" containerID="2ee98adfec418dd9ab41ad9d9f01da3b42eb2da6ed01318b9a2d5496b22376b6" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.771625 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce715d39-f1ec-46f6-be8b-de76de850a77" path="/var/lib/kubelet/pods/ce715d39-f1ec-46f6-be8b-de76de850a77/volumes" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.772487 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16f1378-b929-44f2-a851-c4de7620ae5b" path="/var/lib/kubelet/pods/e16f1378-b929-44f2-a851-c4de7620ae5b/volumes" Mar 19 17:16:03 crc kubenswrapper[4792]: I0319 17:16:03.773738 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd49301f-b993-4978-ad6a-393fc9fdcb64" path="/var/lib/kubelet/pods/fd49301f-b993-4978-ad6a-393fc9fdcb64/volumes" Mar 19 17:16:04 crc kubenswrapper[4792]: I0319 17:16:04.037276 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sth98"] Mar 19 17:16:04 crc kubenswrapper[4792]: I0319 17:16:04.053701 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-create-sth98"] Mar 19 17:16:04 crc kubenswrapper[4792]: I0319 17:16:04.741862 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:04 crc kubenswrapper[4792]: I0319 17:16:04.908849 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46xd\" (UniqueName: \"kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd\") pod \"a399b4b6-821d-48b7-a4e7-8d67428edfa6\" (UID: \"a399b4b6-821d-48b7-a4e7-8d67428edfa6\") " Mar 19 17:16:04 crc kubenswrapper[4792]: I0319 17:16:04.915681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd" (OuterVolumeSpecName: "kube-api-access-j46xd") pod "a399b4b6-821d-48b7-a4e7-8d67428edfa6" (UID: "a399b4b6-821d-48b7-a4e7-8d67428edfa6"). InnerVolumeSpecName "kube-api-access-j46xd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.013033 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46xd\" (UniqueName: \"kubernetes.io/projected/a399b4b6-821d-48b7-a4e7-8d67428edfa6-kube-api-access-j46xd\") on node \"crc\" DevicePath \"\"" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.030638 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-aede-account-create-update-4zdv8"] Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.043450 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-aede-account-create-update-4zdv8"] Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.388803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" event={"ID":"a399b4b6-821d-48b7-a4e7-8d67428edfa6","Type":"ContainerDied","Data":"d829bff901585e7e63f6e071439e5aa041f43e0538a918a066e3b2b768eb43c9"} Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.388866 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d829bff901585e7e63f6e071439e5aa041f43e0538a918a066e3b2b768eb43c9" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.388894 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565676-4rrrc" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.397401 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565670-wjmq6"] Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.408211 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565670-wjmq6"] Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.754029 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28af1b81-6dbb-4b6c-ab1d-774bed9bc419" path="/var/lib/kubelet/pods/28af1b81-6dbb-4b6c-ab1d-774bed9bc419/volumes" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.755329 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cfc4db-45c9-4d06-9244-9fa9dc88b94b" path="/var/lib/kubelet/pods/36cfc4db-45c9-4d06-9244-9fa9dc88b94b/volumes" Mar 19 17:16:05 crc kubenswrapper[4792]: I0319 17:16:05.756107 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946d010a-c057-4057-aa6a-87e4d739df92" path="/var/lib/kubelet/pods/946d010a-c057-4057-aa6a-87e4d739df92/volumes" Mar 19 17:16:06 crc kubenswrapper[4792]: I0319 17:16:06.030754 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9q76v"] Mar 19 17:16:06 crc kubenswrapper[4792]: I0319 17:16:06.043296 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9q76v"] Mar 19 17:16:07 crc kubenswrapper[4792]: I0319 17:16:07.753412 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a1f309-b03a-483d-8df7-7539bba505a1" path="/var/lib/kubelet/pods/b8a1f309-b03a-483d-8df7-7539bba505a1/volumes" Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 17:16:13.054186 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-8ac8-account-create-update-l6zkp"] Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 
17:16:13.070590 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b"] Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 17:16:13.083598 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dmv9b"] Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 17:16:13.096427 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-8ac8-account-create-update-l6zkp"] Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 17:16:13.755807 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62179f52-7a2c-4ca8-91e3-9fd241d9b1e6" path="/var/lib/kubelet/pods/62179f52-7a2c-4ca8-91e3-9fd241d9b1e6/volumes" Mar 19 17:16:13 crc kubenswrapper[4792]: I0319 17:16:13.756961 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76877385-6964-4f62-a8e7-9d73a772c630" path="/var/lib/kubelet/pods/76877385-6964-4f62-a8e7-9d73a772c630/volumes" Mar 19 17:16:17 crc kubenswrapper[4792]: I0319 17:16:17.510940 4792 generic.go:334] "Generic (PLEG): container finished" podID="9a911839-8c9b-43da-9ef6-eed89833426e" containerID="a7db00d01d42ab4943e0bd3f49f1be252108af28b71b14e07fa3735fdebad16b" exitCode=0 Mar 19 17:16:17 crc kubenswrapper[4792]: I0319 17:16:17.511019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" event={"ID":"9a911839-8c9b-43da-9ef6-eed89833426e","Type":"ContainerDied","Data":"a7db00d01d42ab4943e0bd3f49f1be252108af28b71b14e07fa3735fdebad16b"} Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.033970 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.112961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam\") pod \"9a911839-8c9b-43da-9ef6-eed89833426e\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.113030 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory\") pod \"9a911839-8c9b-43da-9ef6-eed89833426e\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.113098 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle\") pod \"9a911839-8c9b-43da-9ef6-eed89833426e\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.113218 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9d92\" (UniqueName: \"kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92\") pod \"9a911839-8c9b-43da-9ef6-eed89833426e\" (UID: \"9a911839-8c9b-43da-9ef6-eed89833426e\") " Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.119560 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92" (OuterVolumeSpecName: "kube-api-access-s9d92") pod "9a911839-8c9b-43da-9ef6-eed89833426e" (UID: "9a911839-8c9b-43da-9ef6-eed89833426e"). InnerVolumeSpecName "kube-api-access-s9d92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.119597 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9a911839-8c9b-43da-9ef6-eed89833426e" (UID: "9a911839-8c9b-43da-9ef6-eed89833426e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.148028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory" (OuterVolumeSpecName: "inventory") pod "9a911839-8c9b-43da-9ef6-eed89833426e" (UID: "9a911839-8c9b-43da-9ef6-eed89833426e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.148435 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a911839-8c9b-43da-9ef6-eed89833426e" (UID: "9a911839-8c9b-43da-9ef6-eed89833426e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.216654 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.216697 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.216711 4792 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a911839-8c9b-43da-9ef6-eed89833426e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.216725 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9d92\" (UniqueName: \"kubernetes.io/projected/9a911839-8c9b-43da-9ef6-eed89833426e-kube-api-access-s9d92\") on node \"crc\" DevicePath \"\"" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.542738 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" event={"ID":"9a911839-8c9b-43da-9ef6-eed89833426e","Type":"ContainerDied","Data":"b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5"} Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.542806 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7077a67eb740188445f72a1a33dee5963597abf689b05b4dd2c0134594800d5" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.542912 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.664632 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h"] Mar 19 17:16:19 crc kubenswrapper[4792]: E0319 17:16:19.665425 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a911839-8c9b-43da-9ef6-eed89833426e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.665441 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a911839-8c9b-43da-9ef6-eed89833426e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 19 17:16:19 crc kubenswrapper[4792]: E0319 17:16:19.665456 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a399b4b6-821d-48b7-a4e7-8d67428edfa6" containerName="oc" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.665463 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a399b4b6-821d-48b7-a4e7-8d67428edfa6" containerName="oc" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.665707 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a399b4b6-821d-48b7-a4e7-8d67428edfa6" containerName="oc" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.665731 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a911839-8c9b-43da-9ef6-eed89833426e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.666548 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.678099 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.678136 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.678695 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.679316 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h"] Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.679617 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.729374 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.729554 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xznv\" (UniqueName: \"kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc 
kubenswrapper[4792]: I0319 17:16:19.729666 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.831680 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.831871 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xznv\" (UniqueName: \"kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.832033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.838608 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.841283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.857937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xznv\" (UniqueName: \"kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-52p9h\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:19 crc kubenswrapper[4792]: I0319 17:16:19.985589 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:16:20 crc kubenswrapper[4792]: I0319 17:16:20.570888 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h"] Mar 19 17:16:21 crc kubenswrapper[4792]: I0319 17:16:21.567817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" event={"ID":"fbd720bf-e288-4d7c-8c10-4f61bdfee093","Type":"ContainerStarted","Data":"5d77bd00ee76392520c17a15b4b99b3a099faa63a8115b554dfd3c947cf5e6f4"} Mar 19 17:16:22 crc kubenswrapper[4792]: I0319 17:16:22.591920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" event={"ID":"fbd720bf-e288-4d7c-8c10-4f61bdfee093","Type":"ContainerStarted","Data":"5b010a80a5fcdb2abdd00f09caffae15dba0fb0c6c445505094a9c73b73194a6"} Mar 19 17:16:22 crc kubenswrapper[4792]: I0319 17:16:22.626158 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" podStartSLOduration=2.606611603 podStartE2EDuration="3.626133511s" podCreationTimestamp="2026-03-19 17:16:19 +0000 UTC" firstStartedPulling="2026-03-19 17:16:20.578231334 +0000 UTC m=+2143.724288874" lastFinishedPulling="2026-03-19 17:16:21.597753222 +0000 UTC m=+2144.743810782" observedRunningTime="2026-03-19 17:16:22.610402294 +0000 UTC m=+2145.756459834" watchObservedRunningTime="2026-03-19 17:16:22.626133511 +0000 UTC m=+2145.772191051" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.049598 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4eab-account-create-update-b4vhn"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.061208 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4eab-account-create-update-b4vhn"] Mar 19 17:16:37 
crc kubenswrapper[4792]: I0319 17:16:37.075160 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gpcnr"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.089809 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4jvsw"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.102914 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-395f-account-create-update-btrf9"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.112167 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8475-account-create-update-j72dl"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.123067 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-tjs8v"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.133406 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gpcnr"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.144491 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4jvsw"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.154617 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0be0-account-create-update-8mkbd"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.166077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-68wjz"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.176444 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-tjs8v"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.186759 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8475-account-create-update-j72dl"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.199115 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0be0-account-create-update-8mkbd"] Mar 19 17:16:37 crc 
kubenswrapper[4792]: I0319 17:16:37.208196 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-68wjz"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.218339 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-395f-account-create-update-btrf9"] Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.756708 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447122e9-4195-4d8b-992d-dc435c22fa07" path="/var/lib/kubelet/pods/447122e9-4195-4d8b-992d-dc435c22fa07/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.758652 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50dd4286-d2f2-4c9b-a80d-e4731ddc902b" path="/var/lib/kubelet/pods/50dd4286-d2f2-4c9b-a80d-e4731ddc902b/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.760050 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e42a2c-5486-4292-8810-da11833a706a" path="/var/lib/kubelet/pods/54e42a2c-5486-4292-8810-da11833a706a/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.761175 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606a03e6-0ad3-4369-9921-f68f56b278f4" path="/var/lib/kubelet/pods/606a03e6-0ad3-4369-9921-f68f56b278f4/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.762754 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8358a03b-42d8-46b5-ab30-b4ac6486da4f" path="/var/lib/kubelet/pods/8358a03b-42d8-46b5-ab30-b4ac6486da4f/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.763833 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855fda36-92fa-4c54-8976-43639fa2ee51" path="/var/lib/kubelet/pods/855fda36-92fa-4c54-8976-43639fa2ee51/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.764945 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8eaf3e-d60d-4940-8fed-d307ef4afd12" 
path="/var/lib/kubelet/pods/de8eaf3e-d60d-4940-8fed-d307ef4afd12/volumes" Mar 19 17:16:37 crc kubenswrapper[4792]: I0319 17:16:37.767205 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b04989-4417-4b0e-9a41-f4980d079a45" path="/var/lib/kubelet/pods/e2b04989-4417-4b0e-9a41-f4980d079a45/volumes" Mar 19 17:16:43 crc kubenswrapper[4792]: I0319 17:16:43.034809 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mmz5r"] Mar 19 17:16:43 crc kubenswrapper[4792]: I0319 17:16:43.048945 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mmz5r"] Mar 19 17:16:43 crc kubenswrapper[4792]: I0319 17:16:43.753041 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efad1545-5a1e-45ab-bf50-952c2c8eeba9" path="/var/lib/kubelet/pods/efad1545-5a1e-45ab-bf50-952c2c8eeba9/volumes" Mar 19 17:16:46 crc kubenswrapper[4792]: I0319 17:16:46.027964 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xbdtj"] Mar 19 17:16:46 crc kubenswrapper[4792]: I0319 17:16:46.072450 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xbdtj"] Mar 19 17:16:47 crc kubenswrapper[4792]: I0319 17:16:47.750999 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193d3d1f-e773-4b86-a176-ddb5c7727e39" path="/var/lib/kubelet/pods/193d3d1f-e773-4b86-a176-ddb5c7727e39/volumes" Mar 19 17:17:03 crc kubenswrapper[4792]: I0319 17:17:03.959764 4792 scope.go:117] "RemoveContainer" containerID="364d68280989e43a835519963ba37ac7a205fef71c3d808b9ad18eaddbaf008f" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.002301 4792 scope.go:117] "RemoveContainer" containerID="5aca78e91c27c60e51b5222cd7268fb1f170e79d2a61f61d2e7cbdc08a60e901" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.067151 4792 scope.go:117] "RemoveContainer" containerID="6a45785ead2d5da13929da215c94e3ad1d6e7f2d014ad90b6df5e5b47c9adf29" Mar 19 
17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.101942 4792 scope.go:117] "RemoveContainer" containerID="8c64c2fbfaf423d4d5ee1649fa106580e4fa6598809de5a77fb177bc3f891de7" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.161341 4792 scope.go:117] "RemoveContainer" containerID="3ab5a8bd68453441589c53c619b9e992e73095eccb2ad26b200e6839b26da278" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.210333 4792 scope.go:117] "RemoveContainer" containerID="db7f74c2ecc023c5a4ef594a7ac15a70c9ba2e00a8e643aebd0b5d68b73ea062" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.269586 4792 scope.go:117] "RemoveContainer" containerID="c2d4783ebd3950583d6c59450d4d0dac69bdd8819b685dae1917d852b004270e" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.317284 4792 scope.go:117] "RemoveContainer" containerID="48cd40e5cca64520aae05d0984acd1be08bb083b1211e8782cd56040bdac4f23" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.338122 4792 scope.go:117] "RemoveContainer" containerID="4bc5a05413e5e2b531a68186d33151c9ba55e703e341a758957000414e23a12e" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.360528 4792 scope.go:117] "RemoveContainer" containerID="4ecc6b182f8af45cc28aa535c68f33dbcf0c8ab9802e1610402108d621d13500" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.391410 4792 scope.go:117] "RemoveContainer" containerID="abc92c4d5e332e7935d081fddda3e7e0b52da9373251052abda89113d457ad36" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.414923 4792 scope.go:117] "RemoveContainer" containerID="185bdaf97883defdde6f3f1d6b8bf1c91e29b94c9a9315bd83b34345ff74cf2e" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.434346 4792 scope.go:117] "RemoveContainer" containerID="fe27ba4ea0fedff04363a0fdca368804063f4a47e94f1dcd031115cea9503a76" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.458933 4792 scope.go:117] "RemoveContainer" containerID="2e4da9393a4cd015582f8bc3e191fdf46ffd0695a530de29cc25437e905395c1" Mar 19 17:17:04 crc 
kubenswrapper[4792]: I0319 17:17:04.481902 4792 scope.go:117] "RemoveContainer" containerID="7e1852b51e8511f8e10297200c675aa151f1049e2881185f705aefa87f55c0e0" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.514582 4792 scope.go:117] "RemoveContainer" containerID="6ded6d9f98da1353d7adfa80be6fd2d1df2064d41ccc90542bb500a56c2b4d0f" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.533469 4792 scope.go:117] "RemoveContainer" containerID="3558738f40913abb138aa214cad4d5f68e91f22874d7f3a7e6ecb70cc1109a8c" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.558115 4792 scope.go:117] "RemoveContainer" containerID="e11b72e5f6c033109a62cf3fb361d74c864047f2ff5f54af8d17e9bac68c9fd4" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.579394 4792 scope.go:117] "RemoveContainer" containerID="38f4dc509c4de4086d81c080a5bc3674b3855afd052c101d9616e4f178f23030" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.598657 4792 scope.go:117] "RemoveContainer" containerID="93a0547664e0a71ac8735e5c9a5c44d897bb4f5e04c7fa9ef6cc50f0e2d57f22" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.618625 4792 scope.go:117] "RemoveContainer" containerID="64eaffe2b280b2e08a5d878475b75b106dbcfd412100fee0183c7b96844ffe5b" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.640014 4792 scope.go:117] "RemoveContainer" containerID="5d34c95811a754bd895a6883376101a277ed72f1df976aac47b6f0107cdeae95" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.659812 4792 scope.go:117] "RemoveContainer" containerID="a515db8d10c065dbbaba4db5e26cdd1e01a4271ffe0fecbe58bd4625f3e618b3" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.685537 4792 scope.go:117] "RemoveContainer" containerID="ebc71e2e8892fb06195cc7dd18fb6b6c446e5f7568813734f5821cd5fdeb9d49" Mar 19 17:17:04 crc kubenswrapper[4792]: I0319 17:17:04.715166 4792 scope.go:117] "RemoveContainer" containerID="60d45b0fc68c6ad678ddc6f7e4ac6ef7051520603e10954c066f5155e487aa19" Mar 19 17:17:19 crc kubenswrapper[4792]: I0319 
17:17:19.060238 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2chz4"] Mar 19 17:17:19 crc kubenswrapper[4792]: I0319 17:17:19.080387 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2chz4"] Mar 19 17:17:19 crc kubenswrapper[4792]: I0319 17:17:19.762337 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b09649-8b3c-4328-97b8-5c5c8d3e198b" path="/var/lib/kubelet/pods/13b09649-8b3c-4328-97b8-5c5c8d3e198b/volumes" Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.067896 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wm2lm"] Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.079617 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h8xdp"] Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.091398 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-86jjn"] Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.100852 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wm2lm"] Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.110570 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-86jjn"] Mar 19 17:17:26 crc kubenswrapper[4792]: I0319 17:17:26.120386 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h8xdp"] Mar 19 17:17:27 crc kubenswrapper[4792]: I0319 17:17:27.756895 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03107c0e-b888-4df4-892a-daebb217a18e" path="/var/lib/kubelet/pods/03107c0e-b888-4df4-892a-daebb217a18e/volumes" Mar 19 17:17:27 crc kubenswrapper[4792]: I0319 17:17:27.760414 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398bc201-2c6c-4434-ad7a-208f048b9f5c" path="/var/lib/kubelet/pods/398bc201-2c6c-4434-ad7a-208f048b9f5c/volumes" Mar 19 17:17:27 crc 
kubenswrapper[4792]: I0319 17:17:27.762799 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567d324f-126d-4f06-91df-d2d84fd836f3" path="/var/lib/kubelet/pods/567d324f-126d-4f06-91df-d2d84fd836f3/volumes" Mar 19 17:17:50 crc kubenswrapper[4792]: I0319 17:17:50.230944 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:17:50 crc kubenswrapper[4792]: I0319 17:17:50.231414 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.848407 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.857113 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.870799 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.944745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.945132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krkbv\" (UniqueName: \"kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:52 crc kubenswrapper[4792]: I0319 17:17:52.945376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.047194 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.047277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.047320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krkbv\" (UniqueName: \"kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.048053 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.048251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.067796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krkbv\" (UniqueName: \"kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv\") pod \"redhat-operators-8ww6r\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.186069 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:17:53 crc kubenswrapper[4792]: I0319 17:17:53.699480 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:17:53 crc kubenswrapper[4792]: W0319 17:17:53.703950 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod781dfb1a_482d_453e_9963_60a57d9d37a1.slice/crio-98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626 WatchSource:0}: Error finding container 98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626: Status 404 returned error can't find the container with id 98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626 Mar 19 17:17:54 crc kubenswrapper[4792]: I0319 17:17:54.572351 4792 generic.go:334] "Generic (PLEG): container finished" podID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerID="2131b9688aa0bad94912a221c649c100da5c89e5a44f6ba0fa788a344144747c" exitCode=0 Mar 19 17:17:54 crc kubenswrapper[4792]: I0319 17:17:54.572488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerDied","Data":"2131b9688aa0bad94912a221c649c100da5c89e5a44f6ba0fa788a344144747c"} Mar 19 17:17:54 crc kubenswrapper[4792]: I0319 17:17:54.572890 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerStarted","Data":"98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626"} Mar 19 17:17:55 crc kubenswrapper[4792]: I0319 17:17:55.055404 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6ftwc"] Mar 19 17:17:55 crc kubenswrapper[4792]: I0319 17:17:55.065813 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6ftwc"] Mar 19 
17:17:55 crc kubenswrapper[4792]: I0319 17:17:55.756306 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef634102-a683-498b-ad98-61d470f7fefa" path="/var/lib/kubelet/pods/ef634102-a683-498b-ad98-61d470f7fefa/volumes" Mar 19 17:17:56 crc kubenswrapper[4792]: I0319 17:17:56.597645 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerStarted","Data":"a93f0052a2f257e72063fc338ca14da77f58ffa44903abf86bf49def745d65a9"} Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.159180 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565678-kbpzb"] Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.161415 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.163335 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.163535 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.163585 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.174534 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565678-kbpzb"] Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.323756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7jc\" (UniqueName: \"kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc\") pod \"auto-csr-approver-29565678-kbpzb\" (UID: 
\"01596d95-4a47-495b-8ba8-d62187e696ee\") " pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.426946 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7jc\" (UniqueName: \"kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc\") pod \"auto-csr-approver-29565678-kbpzb\" (UID: \"01596d95-4a47-495b-8ba8-d62187e696ee\") " pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.446359 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7jc\" (UniqueName: \"kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc\") pod \"auto-csr-approver-29565678-kbpzb\" (UID: \"01596d95-4a47-495b-8ba8-d62187e696ee\") " pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:00 crc kubenswrapper[4792]: I0319 17:18:00.485031 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:01 crc kubenswrapper[4792]: I0319 17:18:01.245039 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565678-kbpzb"] Mar 19 17:18:01 crc kubenswrapper[4792]: W0319 17:18:01.262994 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01596d95_4a47_495b_8ba8_d62187e696ee.slice/crio-d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d WatchSource:0}: Error finding container d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d: Status 404 returned error can't find the container with id d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d Mar 19 17:18:01 crc kubenswrapper[4792]: I0319 17:18:01.660232 4792 generic.go:334] "Generic (PLEG): container finished" podID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerID="a93f0052a2f257e72063fc338ca14da77f58ffa44903abf86bf49def745d65a9" exitCode=0 Mar 19 17:18:01 crc kubenswrapper[4792]: I0319 17:18:01.660305 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerDied","Data":"a93f0052a2f257e72063fc338ca14da77f58ffa44903abf86bf49def745d65a9"} Mar 19 17:18:01 crc kubenswrapper[4792]: I0319 17:18:01.663342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" event={"ID":"01596d95-4a47-495b-8ba8-d62187e696ee","Type":"ContainerStarted","Data":"d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d"} Mar 19 17:18:02 crc kubenswrapper[4792]: I0319 17:18:02.676078 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" 
event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerStarted","Data":"bcf0d30310f5467c50f4c960a522289cb71da6c7f7f8bdc64eb5655614c3cde7"} Mar 19 17:18:02 crc kubenswrapper[4792]: I0319 17:18:02.706898 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8ww6r" podStartSLOduration=3.166574519 podStartE2EDuration="10.706880103s" podCreationTimestamp="2026-03-19 17:17:52 +0000 UTC" firstStartedPulling="2026-03-19 17:17:54.574740323 +0000 UTC m=+2237.720797863" lastFinishedPulling="2026-03-19 17:18:02.115045907 +0000 UTC m=+2245.261103447" observedRunningTime="2026-03-19 17:18:02.694950788 +0000 UTC m=+2245.841008328" watchObservedRunningTime="2026-03-19 17:18:02.706880103 +0000 UTC m=+2245.852937643" Mar 19 17:18:03 crc kubenswrapper[4792]: I0319 17:18:03.186248 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:03 crc kubenswrapper[4792]: I0319 17:18:03.186564 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:03 crc kubenswrapper[4792]: I0319 17:18:03.687035 4792 generic.go:334] "Generic (PLEG): container finished" podID="01596d95-4a47-495b-8ba8-d62187e696ee" containerID="7b5c1f04ccc52d5ca95ea356f68ae069515015264923d7a8972181b8f60a0bcf" exitCode=0 Mar 19 17:18:03 crc kubenswrapper[4792]: I0319 17:18:03.688113 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" event={"ID":"01596d95-4a47-495b-8ba8-d62187e696ee","Type":"ContainerDied","Data":"7b5c1f04ccc52d5ca95ea356f68ae069515015264923d7a8972181b8f60a0bcf"} Mar 19 17:18:04 crc kubenswrapper[4792]: I0319 17:18:04.234898 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8ww6r" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="registry-server" 
probeResult="failure" output=< Mar 19 17:18:04 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:18:04 crc kubenswrapper[4792]: > Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.148822 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.149467 4792 scope.go:117] "RemoveContainer" containerID="be1d7532550a7578e71a2043fef89ab8d93bef6083d985c1c48af294bd01a3c6" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.179046 4792 scope.go:117] "RemoveContainer" containerID="919d8ed8b8f0c3e2484ed415e3b412db4ed9c307d4cdf717f0c84cf8e2050417" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.238366 4792 scope.go:117] "RemoveContainer" containerID="ef5ed9526e3a9f2edb00d59d16e401ab9d770cdc8d505864ad15ad8f04b617a8" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.269636 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7jc\" (UniqueName: \"kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc\") pod \"01596d95-4a47-495b-8ba8-d62187e696ee\" (UID: \"01596d95-4a47-495b-8ba8-d62187e696ee\") " Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.277795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc" (OuterVolumeSpecName: "kube-api-access-2b7jc") pod "01596d95-4a47-495b-8ba8-d62187e696ee" (UID: "01596d95-4a47-495b-8ba8-d62187e696ee"). InnerVolumeSpecName "kube-api-access-2b7jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.308068 4792 scope.go:117] "RemoveContainer" containerID="6ce1093c98733ad0c9ca979f6fb001e0e8eb02348927ecb86be0c76bf0dce482" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.358419 4792 scope.go:117] "RemoveContainer" containerID="cfe5f948852c69429a2761f42618a607b6376d98d36510e06fcb0c187d584dd9" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.374210 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b7jc\" (UniqueName: \"kubernetes.io/projected/01596d95-4a47-495b-8ba8-d62187e696ee-kube-api-access-2b7jc\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.707161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" event={"ID":"01596d95-4a47-495b-8ba8-d62187e696ee","Type":"ContainerDied","Data":"d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d"} Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.707611 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a21cdaede50437236a39649dc62aae810873183be06e06d112b7e8baa35e5d" Mar 19 17:18:05 crc kubenswrapper[4792]: I0319 17:18:05.707223 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565678-kbpzb" Mar 19 17:18:06 crc kubenswrapper[4792]: I0319 17:18:06.228332 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565672-7gqn5"] Mar 19 17:18:06 crc kubenswrapper[4792]: I0319 17:18:06.246361 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565672-7gqn5"] Mar 19 17:18:07 crc kubenswrapper[4792]: I0319 17:18:07.753356 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d54b5d3-d6b5-428c-9e78-ab45a7af529b" path="/var/lib/kubelet/pods/4d54b5d3-d6b5-428c-9e78-ab45a7af529b/volumes" Mar 19 17:18:11 crc kubenswrapper[4792]: I0319 17:18:11.773548 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbd720bf-e288-4d7c-8c10-4f61bdfee093" containerID="5b010a80a5fcdb2abdd00f09caffae15dba0fb0c6c445505094a9c73b73194a6" exitCode=0 Mar 19 17:18:11 crc kubenswrapper[4792]: I0319 17:18:11.773649 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" event={"ID":"fbd720bf-e288-4d7c-8c10-4f61bdfee093","Type":"ContainerDied","Data":"5b010a80a5fcdb2abdd00f09caffae15dba0fb0c6c445505094a9c73b73194a6"} Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.056628 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-edb3-account-create-update-wv7rz"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.111288 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wqttw"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.138045 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-m4xgs"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.168958 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wqttw"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.229637 
4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-edb3-account-create-update-wv7rz"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.268031 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-m4xgs"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.290153 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.375889 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.459553 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.478393 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory\") pod \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.478629 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xznv\" (UniqueName: \"kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv\") pod \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.478662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam\") pod \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\" (UID: \"fbd720bf-e288-4d7c-8c10-4f61bdfee093\") " Mar 19 17:18:13 crc kubenswrapper[4792]: 
I0319 17:18:13.483938 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv" (OuterVolumeSpecName: "kube-api-access-4xznv") pod "fbd720bf-e288-4d7c-8c10-4f61bdfee093" (UID: "fbd720bf-e288-4d7c-8c10-4f61bdfee093"). InnerVolumeSpecName "kube-api-access-4xznv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.560087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbd720bf-e288-4d7c-8c10-4f61bdfee093" (UID: "fbd720bf-e288-4d7c-8c10-4f61bdfee093"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.563774 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.571203 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory" (OuterVolumeSpecName: "inventory") pod "fbd720bf-e288-4d7c-8c10-4f61bdfee093" (UID: "fbd720bf-e288-4d7c-8c10-4f61bdfee093"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.582543 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xznv\" (UniqueName: \"kubernetes.io/projected/fbd720bf-e288-4d7c-8c10-4f61bdfee093-kube-api-access-4xznv\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.582579 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.582592 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbd720bf-e288-4d7c-8c10-4f61bdfee093-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.761532 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf" path="/var/lib/kubelet/pods/15ee8330-8e1e-47e1-9cf0-d4ec9ae2cadf/volumes" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.762939 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57162945-9d65-4f62-b049-d8e61a06c508" path="/var/lib/kubelet/pods/57162945-9d65-4f62-b049-d8e61a06c508/volumes" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.763863 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f7cad5-612c-4946-8596-c7e5837465a1" path="/var/lib/kubelet/pods/62f7cad5-612c-4946-8596-c7e5837465a1/volumes" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.797985 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" event={"ID":"fbd720bf-e288-4d7c-8c10-4f61bdfee093","Type":"ContainerDied","Data":"5d77bd00ee76392520c17a15b4b99b3a099faa63a8115b554dfd3c947cf5e6f4"} Mar 19 17:18:13 crc 
kubenswrapper[4792]: I0319 17:18:13.798021 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-52p9h" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.798038 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d77bd00ee76392520c17a15b4b99b3a099faa63a8115b554dfd3c947cf5e6f4" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.888810 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8"] Mar 19 17:18:13 crc kubenswrapper[4792]: E0319 17:18:13.889404 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd720bf-e288-4d7c-8c10-4f61bdfee093" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.889421 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd720bf-e288-4d7c-8c10-4f61bdfee093" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 19 17:18:13 crc kubenswrapper[4792]: E0319 17:18:13.889452 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01596d95-4a47-495b-8ba8-d62187e696ee" containerName="oc" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.889461 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="01596d95-4a47-495b-8ba8-d62187e696ee" containerName="oc" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.889689 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd720bf-e288-4d7c-8c10-4f61bdfee093" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.889705 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="01596d95-4a47-495b-8ba8-d62187e696ee" containerName="oc" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.890599 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.893182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.893282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.893413 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.893533 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.916168 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8"] Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.996062 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.996127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87jh6\" (UniqueName: \"kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 
17:18:13 crc kubenswrapper[4792]: I0319 17:18:13.996284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.098825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.098970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87jh6\" (UniqueName: \"kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.099211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.103962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.103973 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.124647 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87jh6\" (UniqueName: \"kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.212202 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.806902 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8ww6r" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="registry-server" containerID="cri-o://bcf0d30310f5467c50f4c960a522289cb71da6c7f7f8bdc64eb5655614c3cde7" gracePeriod=2 Mar 19 17:18:14 crc kubenswrapper[4792]: I0319 17:18:14.846600 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.035252 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-82d7-account-create-update-mzkgh"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.049657 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-5k7gr"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.060473 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-819d-account-create-update-nw88m"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.069163 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-5k7gr"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.078252 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-82d7-account-create-update-mzkgh"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.087361 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-819d-account-create-update-nw88m"] Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.761357 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e" path="/var/lib/kubelet/pods/1ff08bfa-a548-4622-b61c-e3dfbc1e1e0e/volumes" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 
17:18:15.764259 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b6a98e-4345-443a-b896-a4b73cda3c34" path="/var/lib/kubelet/pods/a2b6a98e-4345-443a-b896-a4b73cda3c34/volumes" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.765157 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4019769-bbd1-4dea-b732-315d331cb7c7" path="/var/lib/kubelet/pods/d4019769-bbd1-4dea-b732-315d331cb7c7/volumes" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.818575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" event={"ID":"4a1da42a-e4a2-4624-a6e1-57f2d83d331c","Type":"ContainerStarted","Data":"3d3c7366cd2ad48867a638e78d0ce67eae2637d525e215853f818585ae6641ca"} Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.818615 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" event={"ID":"4a1da42a-e4a2-4624-a6e1-57f2d83d331c","Type":"ContainerStarted","Data":"efd77615be8465fd30f68987dba5a3acada5009da673cd673ea231eeb4f78d3a"} Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.825647 4792 generic.go:334] "Generic (PLEG): container finished" podID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerID="bcf0d30310f5467c50f4c960a522289cb71da6c7f7f8bdc64eb5655614c3cde7" exitCode=0 Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.825702 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerDied","Data":"bcf0d30310f5467c50f4c960a522289cb71da6c7f7f8bdc64eb5655614c3cde7"} Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.825744 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ww6r" 
event={"ID":"781dfb1a-482d-453e-9963-60a57d9d37a1","Type":"ContainerDied","Data":"98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626"} Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.825793 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98803d25efca3a8bb8e384177cec7fc9f3ff3ad672a604390926d4ef3d5e6626" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.843338 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.878131 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" podStartSLOduration=2.439612366 podStartE2EDuration="2.878110662s" podCreationTimestamp="2026-03-19 17:18:13 +0000 UTC" firstStartedPulling="2026-03-19 17:18:14.882943536 +0000 UTC m=+2258.029001076" lastFinishedPulling="2026-03-19 17:18:15.321441832 +0000 UTC m=+2258.467499372" observedRunningTime="2026-03-19 17:18:15.865368125 +0000 UTC m=+2259.011425675" watchObservedRunningTime="2026-03-19 17:18:15.878110662 +0000 UTC m=+2259.024168202" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.944234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities\") pod \"781dfb1a-482d-453e-9963-60a57d9d37a1\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.944523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krkbv\" (UniqueName: \"kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv\") pod \"781dfb1a-482d-453e-9963-60a57d9d37a1\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.944643 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content\") pod \"781dfb1a-482d-453e-9963-60a57d9d37a1\" (UID: \"781dfb1a-482d-453e-9963-60a57d9d37a1\") " Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.946287 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities" (OuterVolumeSpecName: "utilities") pod "781dfb1a-482d-453e-9963-60a57d9d37a1" (UID: "781dfb1a-482d-453e-9963-60a57d9d37a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:18:15 crc kubenswrapper[4792]: I0319 17:18:15.953089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv" (OuterVolumeSpecName: "kube-api-access-krkbv") pod "781dfb1a-482d-453e-9963-60a57d9d37a1" (UID: "781dfb1a-482d-453e-9963-60a57d9d37a1"). InnerVolumeSpecName "kube-api-access-krkbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.048519 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.048770 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krkbv\" (UniqueName: \"kubernetes.io/projected/781dfb1a-482d-453e-9963-60a57d9d37a1-kube-api-access-krkbv\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.122559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "781dfb1a-482d-453e-9963-60a57d9d37a1" (UID: "781dfb1a-482d-453e-9963-60a57d9d37a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.151719 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/781dfb1a-482d-453e-9963-60a57d9d37a1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.837080 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ww6r" Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.882174 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:18:16 crc kubenswrapper[4792]: I0319 17:18:16.892593 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8ww6r"] Mar 19 17:18:17 crc kubenswrapper[4792]: I0319 17:18:17.754082 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" path="/var/lib/kubelet/pods/781dfb1a-482d-453e-9963-60a57d9d37a1/volumes" Mar 19 17:18:20 crc kubenswrapper[4792]: I0319 17:18:20.230657 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:18:20 crc kubenswrapper[4792]: I0319 17:18:20.230980 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:18:50 crc kubenswrapper[4792]: I0319 17:18:50.231514 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:18:50 crc kubenswrapper[4792]: I0319 17:18:50.232197 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:18:50 crc kubenswrapper[4792]: I0319 17:18:50.232448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:18:50 crc kubenswrapper[4792]: I0319 17:18:50.233505 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:18:50 crc kubenswrapper[4792]: I0319 17:18:50.233599 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" gracePeriod=600 Mar 19 17:18:50 crc kubenswrapper[4792]: E0319 17:18:50.357615 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:18:51 crc kubenswrapper[4792]: I0319 17:18:51.212067 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" exitCode=0 Mar 19 17:18:51 crc kubenswrapper[4792]: I0319 
17:18:51.212388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6"} Mar 19 17:18:51 crc kubenswrapper[4792]: I0319 17:18:51.212448 4792 scope.go:117] "RemoveContainer" containerID="df04e1de6332da5f3e8a2d9492121d71dee7eaef8067de758696cf9c7212edb6" Mar 19 17:18:51 crc kubenswrapper[4792]: I0319 17:18:51.213402 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:18:51 crc kubenswrapper[4792]: E0319 17:18:51.213819 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:03 crc kubenswrapper[4792]: I0319 17:19:03.744262 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:19:03 crc kubenswrapper[4792]: E0319 17:19:03.749580 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.480278 4792 scope.go:117] "RemoveContainer" containerID="6d727b4b525c1e280aac89e31eff969fe6948396c520e8c92da3256d31ca8560" Mar 19 
17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.507656 4792 scope.go:117] "RemoveContainer" containerID="1ad8ae26ca5987196abf49447799b0988f8c599ccbbaa2a164bac2a595ecc728" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.608168 4792 scope.go:117] "RemoveContainer" containerID="58b852e0a02edff14382c2c1cb90b77ac07614a4caf9a8f97efe3f35a2adb4ef" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.673236 4792 scope.go:117] "RemoveContainer" containerID="bec299b309a1f6c92fd1747ec186a165f4ff17b80a449cc087da337d44397e8f" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.737707 4792 scope.go:117] "RemoveContainer" containerID="841baf1780d2a5eae61314bcb50a0569a901cba30c27e2fd47e09001f7ff2265" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.794970 4792 scope.go:117] "RemoveContainer" containerID="da3d53b83b5a6366b01c11d304031a2e82f0abbf868ec006fe4899dd73995944" Mar 19 17:19:05 crc kubenswrapper[4792]: I0319 17:19:05.845470 4792 scope.go:117] "RemoveContainer" containerID="a5c3fd2de6271e229fb4333296496d4bbac39b478c7e912bfe55fa9efb6e671b" Mar 19 17:19:06 crc kubenswrapper[4792]: I0319 17:19:06.051970 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v9mgf"] Mar 19 17:19:06 crc kubenswrapper[4792]: I0319 17:19:06.070455 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-v9mgf"] Mar 19 17:19:07 crc kubenswrapper[4792]: I0319 17:19:07.754175 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5027af97-8929-4efd-b9e0-47736ca10da2" path="/var/lib/kubelet/pods/5027af97-8929-4efd-b9e0-47736ca10da2/volumes" Mar 19 17:19:16 crc kubenswrapper[4792]: I0319 17:19:16.740111 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:19:16 crc kubenswrapper[4792]: E0319 17:19:16.741339 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:27 crc kubenswrapper[4792]: I0319 17:19:27.611982 4792 generic.go:334] "Generic (PLEG): container finished" podID="4a1da42a-e4a2-4624-a6e1-57f2d83d331c" containerID="3d3c7366cd2ad48867a638e78d0ce67eae2637d525e215853f818585ae6641ca" exitCode=0 Mar 19 17:19:27 crc kubenswrapper[4792]: I0319 17:19:27.612073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" event={"ID":"4a1da42a-e4a2-4624-a6e1-57f2d83d331c","Type":"ContainerDied","Data":"3d3c7366cd2ad48867a638e78d0ce67eae2637d525e215853f818585ae6641ca"} Mar 19 17:19:27 crc kubenswrapper[4792]: I0319 17:19:27.748182 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:19:27 crc kubenswrapper[4792]: E0319 17:19:27.748582 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.137375 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.275673 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam\") pod \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.275769 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87jh6\" (UniqueName: \"kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6\") pod \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.275800 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory\") pod \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\" (UID: \"4a1da42a-e4a2-4624-a6e1-57f2d83d331c\") " Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.281258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6" (OuterVolumeSpecName: "kube-api-access-87jh6") pod "4a1da42a-e4a2-4624-a6e1-57f2d83d331c" (UID: "4a1da42a-e4a2-4624-a6e1-57f2d83d331c"). InnerVolumeSpecName "kube-api-access-87jh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.304182 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a1da42a-e4a2-4624-a6e1-57f2d83d331c" (UID: "4a1da42a-e4a2-4624-a6e1-57f2d83d331c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.318577 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory" (OuterVolumeSpecName: "inventory") pod "4a1da42a-e4a2-4624-a6e1-57f2d83d331c" (UID: "4a1da42a-e4a2-4624-a6e1-57f2d83d331c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.378862 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.378902 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87jh6\" (UniqueName: \"kubernetes.io/projected/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-kube-api-access-87jh6\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.378916 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a1da42a-e4a2-4624-a6e1-57f2d83d331c-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.638023 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" 
event={"ID":"4a1da42a-e4a2-4624-a6e1-57f2d83d331c","Type":"ContainerDied","Data":"efd77615be8465fd30f68987dba5a3acada5009da673cd673ea231eeb4f78d3a"} Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.638094 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd77615be8465fd30f68987dba5a3acada5009da673cd673ea231eeb4f78d3a" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.638115 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728183 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6"] Mar 19 17:19:29 crc kubenswrapper[4792]: E0319 17:19:29.728663 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="extract-utilities" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728679 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="extract-utilities" Mar 19 17:19:29 crc kubenswrapper[4792]: E0319 17:19:29.728712 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="extract-content" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728718 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="extract-content" Mar 19 17:19:29 crc kubenswrapper[4792]: E0319 17:19:29.728732 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1da42a-e4a2-4624-a6e1-57f2d83d331c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728739 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1da42a-e4a2-4624-a6e1-57f2d83d331c" 
containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:29 crc kubenswrapper[4792]: E0319 17:19:29.728749 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="registry-server" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728755 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="registry-server" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.728997 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="781dfb1a-482d-453e-9963-60a57d9d37a1" containerName="registry-server" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.729018 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1da42a-e4a2-4624-a6e1-57f2d83d331c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.730402 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.733550 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.733983 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.734240 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.734677 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.754503 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6"] Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.890939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.890992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2v5\" (UniqueName: \"kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 
17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.891395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.993033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.993278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2v5\" (UniqueName: \"kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.993383 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:29 crc kubenswrapper[4792]: I0319 17:19:29.998470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:30 crc kubenswrapper[4792]: I0319 17:19:30.002350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:30 crc kubenswrapper[4792]: I0319 17:19:30.011634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2v5\" (UniqueName: \"kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-krwg6\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:30 crc kubenswrapper[4792]: I0319 17:19:30.054010 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:30 crc kubenswrapper[4792]: I0319 17:19:30.636618 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6"] Mar 19 17:19:30 crc kubenswrapper[4792]: I0319 17:19:30.652261 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" event={"ID":"5c39cf60-90bf-4a71-99ca-1ce29cf5450d","Type":"ContainerStarted","Data":"27366099d5613457779e04fafcaaf6041a5f1f2206852cd50f9fbdafecde2960"} Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.051342 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-3376-account-create-update-ps95w"] Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.062481 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-wsz2l"] Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.072882 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-3376-account-create-update-ps95w"] Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.084770 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-wsz2l"] Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.662857 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" event={"ID":"5c39cf60-90bf-4a71-99ca-1ce29cf5450d","Type":"ContainerStarted","Data":"2ccd76e58af35e633d413ef65b0afa116c80b9d24abb004ab8d85e517d955ddc"} Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.690266 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" podStartSLOduration=2.222219014 podStartE2EDuration="2.690246372s" podCreationTimestamp="2026-03-19 17:19:29 +0000 UTC" 
firstStartedPulling="2026-03-19 17:19:30.642487876 +0000 UTC m=+2333.788545416" lastFinishedPulling="2026-03-19 17:19:31.110515234 +0000 UTC m=+2334.256572774" observedRunningTime="2026-03-19 17:19:31.682910576 +0000 UTC m=+2334.828968126" watchObservedRunningTime="2026-03-19 17:19:31.690246372 +0000 UTC m=+2334.836303912" Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.763172 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c576dde9-2cc3-4403-a106-7c9cb996287e" path="/var/lib/kubelet/pods/c576dde9-2cc3-4403-a106-7c9cb996287e/volumes" Mar 19 17:19:31 crc kubenswrapper[4792]: I0319 17:19:31.765020 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd79db6b-b7c7-4d8f-9b7c-c853501d6706" path="/var/lib/kubelet/pods/dd79db6b-b7c7-4d8f-9b7c-c853501d6706/volumes" Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.918027 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.922896 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.934498 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.980065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.980216 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxwg\" (UniqueName: \"kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:32 crc kubenswrapper[4792]: I0319 17:19:32.980315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.082939 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.083289 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tnxwg\" (UniqueName: \"kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.083424 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.083474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.083781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.102320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxwg\" (UniqueName: \"kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg\") pod \"redhat-marketplace-kpxl6\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.251129 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:33 crc kubenswrapper[4792]: I0319 17:19:33.795253 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:33 crc kubenswrapper[4792]: W0319 17:19:33.804985 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3137dc1c_5818_47ab_845b_d885b820fc7b.slice/crio-9600ca7e8decf4d8aaf48292abc8d24a16534020be09905dcfa7ed5d905f7049 WatchSource:0}: Error finding container 9600ca7e8decf4d8aaf48292abc8d24a16534020be09905dcfa7ed5d905f7049: Status 404 returned error can't find the container with id 9600ca7e8decf4d8aaf48292abc8d24a16534020be09905dcfa7ed5d905f7049 Mar 19 17:19:34 crc kubenswrapper[4792]: I0319 17:19:34.696061 4792 generic.go:334] "Generic (PLEG): container finished" podID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerID="49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e" exitCode=0 Mar 19 17:19:34 crc kubenswrapper[4792]: I0319 17:19:34.696127 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerDied","Data":"49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e"} Mar 19 17:19:34 crc kubenswrapper[4792]: I0319 17:19:34.696389 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerStarted","Data":"9600ca7e8decf4d8aaf48292abc8d24a16534020be09905dcfa7ed5d905f7049"} Mar 19 17:19:36 crc kubenswrapper[4792]: I0319 17:19:36.718317 4792 generic.go:334] "Generic (PLEG): container finished" podID="5c39cf60-90bf-4a71-99ca-1ce29cf5450d" containerID="2ccd76e58af35e633d413ef65b0afa116c80b9d24abb004ab8d85e517d955ddc" exitCode=0 Mar 19 17:19:36 crc kubenswrapper[4792]: I0319 
17:19:36.718478 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" event={"ID":"5c39cf60-90bf-4a71-99ca-1ce29cf5450d","Type":"ContainerDied","Data":"2ccd76e58af35e633d413ef65b0afa116c80b9d24abb004ab8d85e517d955ddc"} Mar 19 17:19:36 crc kubenswrapper[4792]: I0319 17:19:36.721866 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerStarted","Data":"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720"} Mar 19 17:19:37 crc kubenswrapper[4792]: I0319 17:19:37.736153 4792 generic.go:334] "Generic (PLEG): container finished" podID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerID="0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720" exitCode=0 Mar 19 17:19:37 crc kubenswrapper[4792]: I0319 17:19:37.736213 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerDied","Data":"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720"} Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.294132 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.418056 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl2v5\" (UniqueName: \"kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5\") pod \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.418208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory\") pod \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.418434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam\") pod \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\" (UID: \"5c39cf60-90bf-4a71-99ca-1ce29cf5450d\") " Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.425216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5" (OuterVolumeSpecName: "kube-api-access-rl2v5") pod "5c39cf60-90bf-4a71-99ca-1ce29cf5450d" (UID: "5c39cf60-90bf-4a71-99ca-1ce29cf5450d"). InnerVolumeSpecName "kube-api-access-rl2v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.453374 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory" (OuterVolumeSpecName: "inventory") pod "5c39cf60-90bf-4a71-99ca-1ce29cf5450d" (UID: "5c39cf60-90bf-4a71-99ca-1ce29cf5450d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.458996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c39cf60-90bf-4a71-99ca-1ce29cf5450d" (UID: "5c39cf60-90bf-4a71-99ca-1ce29cf5450d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.521077 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl2v5\" (UniqueName: \"kubernetes.io/projected/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-kube-api-access-rl2v5\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.521284 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.521382 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c39cf60-90bf-4a71-99ca-1ce29cf5450d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.746736 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.746754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-krwg6" event={"ID":"5c39cf60-90bf-4a71-99ca-1ce29cf5450d","Type":"ContainerDied","Data":"27366099d5613457779e04fafcaaf6041a5f1f2206852cd50f9fbdafecde2960"} Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.746795 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27366099d5613457779e04fafcaaf6041a5f1f2206852cd50f9fbdafecde2960" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.749692 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerStarted","Data":"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c"} Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.797051 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kpxl6" podStartSLOduration=3.3783919190000002 podStartE2EDuration="6.79702476s" podCreationTimestamp="2026-03-19 17:19:32 +0000 UTC" firstStartedPulling="2026-03-19 17:19:34.698285862 +0000 UTC m=+2337.844343402" lastFinishedPulling="2026-03-19 17:19:38.116918703 +0000 UTC m=+2341.262976243" observedRunningTime="2026-03-19 17:19:38.771618344 +0000 UTC m=+2341.917675894" watchObservedRunningTime="2026-03-19 17:19:38.79702476 +0000 UTC m=+2341.943082300" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.820907 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8"] Mar 19 17:19:38 crc kubenswrapper[4792]: E0319 17:19:38.821711 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c39cf60-90bf-4a71-99ca-1ce29cf5450d" 
containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.821730 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c39cf60-90bf-4a71-99ca-1ce29cf5450d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.822009 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c39cf60-90bf-4a71-99ca-1ce29cf5450d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.822855 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.824589 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.828388 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.828483 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.828830 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.836184 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8"] Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.932220 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.932269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hsp\" (UniqueName: \"kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:38 crc kubenswrapper[4792]: I0319 17:19:38.932322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.035247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.035722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc 
kubenswrapper[4792]: I0319 17:19:39.035759 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29hsp\" (UniqueName: \"kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.038956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.039261 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.052313 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29hsp\" (UniqueName: \"kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mf8b8\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.144538 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.734195 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8"] Mar 19 17:19:39 crc kubenswrapper[4792]: I0319 17:19:39.766700 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" event={"ID":"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7","Type":"ContainerStarted","Data":"9e2c2f6f67720644827ee521b6f61ab61763473c2fdcc5554e8a0ea512be3987"} Mar 19 17:19:40 crc kubenswrapper[4792]: I0319 17:19:40.740861 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:19:40 crc kubenswrapper[4792]: E0319 17:19:40.743113 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:40 crc kubenswrapper[4792]: I0319 17:19:40.781384 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" event={"ID":"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7","Type":"ContainerStarted","Data":"70e02e40d4bfcabba28231ac0d05aeb5abd0bbb74249f3c95b03189b00d37eeb"} Mar 19 17:19:40 crc kubenswrapper[4792]: I0319 17:19:40.807272 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" podStartSLOduration=2.341933788 podStartE2EDuration="2.807254483s" podCreationTimestamp="2026-03-19 17:19:38 +0000 UTC" firstStartedPulling="2026-03-19 
17:19:39.737445711 +0000 UTC m=+2342.883503251" lastFinishedPulling="2026-03-19 17:19:40.202766406 +0000 UTC m=+2343.348823946" observedRunningTime="2026-03-19 17:19:40.798284515 +0000 UTC m=+2343.944342055" watchObservedRunningTime="2026-03-19 17:19:40.807254483 +0000 UTC m=+2343.953312023" Mar 19 17:19:43 crc kubenswrapper[4792]: I0319 17:19:43.251542 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:43 crc kubenswrapper[4792]: I0319 17:19:43.252161 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:43 crc kubenswrapper[4792]: I0319 17:19:43.300952 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:43 crc kubenswrapper[4792]: I0319 17:19:43.885548 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:43 crc kubenswrapper[4792]: I0319 17:19:43.935780 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:44 crc kubenswrapper[4792]: I0319 17:19:44.027599 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mfkfn"] Mar 19 17:19:44 crc kubenswrapper[4792]: I0319 17:19:44.042851 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mfkfn"] Mar 19 17:19:45 crc kubenswrapper[4792]: I0319 17:19:45.751881 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c27def-27bf-4b67-abcf-428ff60a77bd" path="/var/lib/kubelet/pods/41c27def-27bf-4b67-abcf-428ff60a77bd/volumes" Mar 19 17:19:45 crc kubenswrapper[4792]: I0319 17:19:45.836973 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kpxl6" 
podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="registry-server" containerID="cri-o://9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c" gracePeriod=2 Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.331101 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.425352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities\") pod \"3137dc1c-5818-47ab-845b-d885b820fc7b\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.425545 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content\") pod \"3137dc1c-5818-47ab-845b-d885b820fc7b\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.425679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxwg\" (UniqueName: \"kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg\") pod \"3137dc1c-5818-47ab-845b-d885b820fc7b\" (UID: \"3137dc1c-5818-47ab-845b-d885b820fc7b\") " Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.426906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities" (OuterVolumeSpecName: "utilities") pod "3137dc1c-5818-47ab-845b-d885b820fc7b" (UID: "3137dc1c-5818-47ab-845b-d885b820fc7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.430375 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg" (OuterVolumeSpecName: "kube-api-access-tnxwg") pod "3137dc1c-5818-47ab-845b-d885b820fc7b" (UID: "3137dc1c-5818-47ab-845b-d885b820fc7b"). InnerVolumeSpecName "kube-api-access-tnxwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.453549 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3137dc1c-5818-47ab-845b-d885b820fc7b" (UID: "3137dc1c-5818-47ab-845b-d885b820fc7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.528620 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.528661 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnxwg\" (UniqueName: \"kubernetes.io/projected/3137dc1c-5818-47ab-845b-d885b820fc7b-kube-api-access-tnxwg\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.528677 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3137dc1c-5818-47ab-845b-d885b820fc7b-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.851492 4792 generic.go:334] "Generic (PLEG): container finished" podID="3137dc1c-5818-47ab-845b-d885b820fc7b" 
containerID="9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c" exitCode=0 Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.851555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerDied","Data":"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c"} Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.851595 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpxl6" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.851630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpxl6" event={"ID":"3137dc1c-5818-47ab-845b-d885b820fc7b","Type":"ContainerDied","Data":"9600ca7e8decf4d8aaf48292abc8d24a16534020be09905dcfa7ed5d905f7049"} Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.851665 4792 scope.go:117] "RemoveContainer" containerID="9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.893202 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.893519 4792 scope.go:117] "RemoveContainer" containerID="0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.921927 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpxl6"] Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.935132 4792 scope.go:117] "RemoveContainer" containerID="49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.968094 4792 scope.go:117] "RemoveContainer" containerID="9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c" Mar 19 
17:19:46 crc kubenswrapper[4792]: E0319 17:19:46.968730 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c\": container with ID starting with 9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c not found: ID does not exist" containerID="9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.968792 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c"} err="failed to get container status \"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c\": rpc error: code = NotFound desc = could not find container \"9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c\": container with ID starting with 9cd3935bf5acda49aa804c4f40d61743913ca38cf03a4750e9be41991b2fc30c not found: ID does not exist" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.968833 4792 scope.go:117] "RemoveContainer" containerID="0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720" Mar 19 17:19:46 crc kubenswrapper[4792]: E0319 17:19:46.969302 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720\": container with ID starting with 0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720 not found: ID does not exist" containerID="0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.969381 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720"} err="failed to get container status 
\"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720\": rpc error: code = NotFound desc = could not find container \"0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720\": container with ID starting with 0de54e614e54c886e01deb562f8c4e022f7acaf91e112cbcb2a2ef832f172720 not found: ID does not exist" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.969416 4792 scope.go:117] "RemoveContainer" containerID="49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e" Mar 19 17:19:46 crc kubenswrapper[4792]: E0319 17:19:46.970009 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e\": container with ID starting with 49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e not found: ID does not exist" containerID="49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e" Mar 19 17:19:46 crc kubenswrapper[4792]: I0319 17:19:46.970049 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e"} err="failed to get container status \"49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e\": rpc error: code = NotFound desc = could not find container \"49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e\": container with ID starting with 49bbe032c82487329ba0dbf0eaf3fbd91728e3c1fbd77ba5ecfda4719eb05b9e not found: ID does not exist" Mar 19 17:19:47 crc kubenswrapper[4792]: I0319 17:19:47.041516 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvqdf"] Mar 19 17:19:47 crc kubenswrapper[4792]: I0319 17:19:47.052998 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvqdf"] Mar 19 17:19:47 crc kubenswrapper[4792]: I0319 17:19:47.759791 4792 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2bbd4aa3-ab8f-496f-8c97-d99869f2c91a" path="/var/lib/kubelet/pods/2bbd4aa3-ab8f-496f-8c97-d99869f2c91a/volumes" Mar 19 17:19:47 crc kubenswrapper[4792]: I0319 17:19:47.761258 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" path="/var/lib/kubelet/pods/3137dc1c-5818-47ab-845b-d885b820fc7b/volumes" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.954880 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:19:48 crc kubenswrapper[4792]: E0319 17:19:48.956831 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="extract-content" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.956958 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="extract-content" Mar 19 17:19:48 crc kubenswrapper[4792]: E0319 17:19:48.957057 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="extract-utilities" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.957131 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="extract-utilities" Mar 19 17:19:48 crc kubenswrapper[4792]: E0319 17:19:48.957207 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="registry-server" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.957256 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="registry-server" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.957709 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3137dc1c-5818-47ab-845b-d885b820fc7b" containerName="registry-server" Mar 19 17:19:48 crc 
kubenswrapper[4792]: I0319 17:19:48.959937 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:48 crc kubenswrapper[4792]: I0319 17:19:48.985119 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.087670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.087736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.088263 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.190761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc 
kubenswrapper[4792]: I0319 17:19:49.190920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.190959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.191624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.191926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.220249 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr\") pod \"certified-operators-vwh2x\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 
17:19:49.279706 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.881285 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:19:49 crc kubenswrapper[4792]: I0319 17:19:49.896684 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerStarted","Data":"cbbf336ae3cad67764809f3d8fe51aeddef8988c4708a262083f5b1a935cc133"} Mar 19 17:19:50 crc kubenswrapper[4792]: I0319 17:19:50.906743 4792 generic.go:334] "Generic (PLEG): container finished" podID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerID="5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a" exitCode=0 Mar 19 17:19:50 crc kubenswrapper[4792]: I0319 17:19:50.906879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerDied","Data":"5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a"} Mar 19 17:19:51 crc kubenswrapper[4792]: I0319 17:19:51.918444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerStarted","Data":"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7"} Mar 19 17:19:52 crc kubenswrapper[4792]: I0319 17:19:52.930817 4792 generic.go:334] "Generic (PLEG): container finished" podID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerID="ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7" exitCode=0 Mar 19 17:19:52 crc kubenswrapper[4792]: I0319 17:19:52.930876 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" 
event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerDied","Data":"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7"} Mar 19 17:19:53 crc kubenswrapper[4792]: I0319 17:19:53.947395 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerStarted","Data":"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107"} Mar 19 17:19:53 crc kubenswrapper[4792]: I0319 17:19:53.975203 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwh2x" podStartSLOduration=3.478912481 podStartE2EDuration="5.97518176s" podCreationTimestamp="2026-03-19 17:19:48 +0000 UTC" firstStartedPulling="2026-03-19 17:19:50.910687049 +0000 UTC m=+2354.056744589" lastFinishedPulling="2026-03-19 17:19:53.406956328 +0000 UTC m=+2356.553013868" observedRunningTime="2026-03-19 17:19:53.966431918 +0000 UTC m=+2357.112489498" watchObservedRunningTime="2026-03-19 17:19:53.97518176 +0000 UTC m=+2357.121239300" Mar 19 17:19:54 crc kubenswrapper[4792]: I0319 17:19:54.740011 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:19:54 crc kubenswrapper[4792]: E0319 17:19:54.740366 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:19:59 crc kubenswrapper[4792]: I0319 17:19:59.280261 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:59 crc 
kubenswrapper[4792]: I0319 17:19:59.280779 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:19:59 crc kubenswrapper[4792]: I0319 17:19:59.345944 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.063933 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.116659 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.146575 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565680-xkcn4"] Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.148590 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.151805 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.151969 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.152123 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.159307 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565680-xkcn4"] Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.265132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhwf\" (UniqueName: \"kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf\") pod \"auto-csr-approver-29565680-xkcn4\" (UID: \"ec7cffd0-73b4-4d98-9ca5-a9884daad11f\") " pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.368133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thhwf\" (UniqueName: \"kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf\") pod \"auto-csr-approver-29565680-xkcn4\" (UID: \"ec7cffd0-73b4-4d98-9ca5-a9884daad11f\") " pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.398974 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thhwf\" (UniqueName: \"kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf\") pod \"auto-csr-approver-29565680-xkcn4\" (UID: \"ec7cffd0-73b4-4d98-9ca5-a9884daad11f\") " 
pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.471192 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:00 crc kubenswrapper[4792]: I0319 17:20:00.961751 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565680-xkcn4"] Mar 19 17:20:01 crc kubenswrapper[4792]: I0319 17:20:01.033578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" event={"ID":"ec7cffd0-73b4-4d98-9ca5-a9884daad11f","Type":"ContainerStarted","Data":"36bb0656b9c985b79fd6e4621403623ff73baace1b8f25b61ea9e1f4a1a43e79"} Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.042410 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwh2x" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="registry-server" containerID="cri-o://9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107" gracePeriod=2 Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.569370 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.640049 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr\") pod \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.640181 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities\") pod \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.640890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities" (OuterVolumeSpecName: "utilities") pod "9852dbf1-7eee-465e-8be7-d477a3efdc8c" (UID: "9852dbf1-7eee-465e-8be7-d477a3efdc8c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.641279 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content\") pod \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\" (UID: \"9852dbf1-7eee-465e-8be7-d477a3efdc8c\") " Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.641860 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.653988 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr" (OuterVolumeSpecName: "kube-api-access-22kbr") pod "9852dbf1-7eee-465e-8be7-d477a3efdc8c" (UID: "9852dbf1-7eee-465e-8be7-d477a3efdc8c"). InnerVolumeSpecName "kube-api-access-22kbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.743870 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9852dbf1-7eee-465e-8be7-d477a3efdc8c-kube-api-access-22kbr\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.865071 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9852dbf1-7eee-465e-8be7-d477a3efdc8c" (UID: "9852dbf1-7eee-465e-8be7-d477a3efdc8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:20:02 crc kubenswrapper[4792]: I0319 17:20:02.949584 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9852dbf1-7eee-465e-8be7-d477a3efdc8c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.074474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" event={"ID":"ec7cffd0-73b4-4d98-9ca5-a9884daad11f","Type":"ContainerStarted","Data":"0d2a77f16e50f698c8000bb2b69fbcccffe158169cb64ba9f3fe23c1d2f7535d"} Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.077620 4792 generic.go:334] "Generic (PLEG): container finished" podID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerID="9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107" exitCode=0 Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.077680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerDied","Data":"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107"} Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.077703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwh2x" event={"ID":"9852dbf1-7eee-465e-8be7-d477a3efdc8c","Type":"ContainerDied","Data":"cbbf336ae3cad67764809f3d8fe51aeddef8988c4708a262083f5b1a935cc133"} Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.077743 4792 scope.go:117] "RemoveContainer" containerID="9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.077969 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwh2x" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.096880 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" podStartSLOduration=1.4663144799999999 podStartE2EDuration="3.096863936s" podCreationTimestamp="2026-03-19 17:20:00 +0000 UTC" firstStartedPulling="2026-03-19 17:20:00.96812016 +0000 UTC m=+2364.114177700" lastFinishedPulling="2026-03-19 17:20:02.598669616 +0000 UTC m=+2365.744727156" observedRunningTime="2026-03-19 17:20:03.092643764 +0000 UTC m=+2366.238701304" watchObservedRunningTime="2026-03-19 17:20:03.096863936 +0000 UTC m=+2366.242921476" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.113562 4792 scope.go:117] "RemoveContainer" containerID="ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.119250 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.129793 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwh2x"] Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.139963 4792 scope.go:117] "RemoveContainer" containerID="5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.254171 4792 scope.go:117] "RemoveContainer" containerID="9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107" Mar 19 17:20:03 crc kubenswrapper[4792]: E0319 17:20:03.254569 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107\": container with ID starting with 9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107 not found: ID does not exist" 
containerID="9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.254609 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107"} err="failed to get container status \"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107\": rpc error: code = NotFound desc = could not find container \"9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107\": container with ID starting with 9c3be751776fc792138ff99f198112c5e4f4bf41f1a72f77b28c085961baf107 not found: ID does not exist" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.254635 4792 scope.go:117] "RemoveContainer" containerID="ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7" Mar 19 17:20:03 crc kubenswrapper[4792]: E0319 17:20:03.254815 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7\": container with ID starting with ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7 not found: ID does not exist" containerID="ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.254862 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7"} err="failed to get container status \"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7\": rpc error: code = NotFound desc = could not find container \"ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7\": container with ID starting with ad1751bab2cb2a4735dab8eaddfa6be23cade82860fa90ca80415c98bc407ad7 not found: ID does not exist" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.254878 4792 scope.go:117] 
"RemoveContainer" containerID="5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a" Mar 19 17:20:03 crc kubenswrapper[4792]: E0319 17:20:03.255075 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a\": container with ID starting with 5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a not found: ID does not exist" containerID="5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.255099 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a"} err="failed to get container status \"5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a\": rpc error: code = NotFound desc = could not find container \"5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a\": container with ID starting with 5dca2fd122d0858019c4f797a8541ff5ce8c6998a1df96aa1d2877d073b54b3a not found: ID does not exist" Mar 19 17:20:03 crc kubenswrapper[4792]: I0319 17:20:03.757330 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" path="/var/lib/kubelet/pods/9852dbf1-7eee-465e-8be7-d477a3efdc8c/volumes" Mar 19 17:20:04 crc kubenswrapper[4792]: I0319 17:20:04.113440 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec7cffd0-73b4-4d98-9ca5-a9884daad11f" containerID="0d2a77f16e50f698c8000bb2b69fbcccffe158169cb64ba9f3fe23c1d2f7535d" exitCode=0 Mar 19 17:20:04 crc kubenswrapper[4792]: I0319 17:20:04.113519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" event={"ID":"ec7cffd0-73b4-4d98-9ca5-a9884daad11f","Type":"ContainerDied","Data":"0d2a77f16e50f698c8000bb2b69fbcccffe158169cb64ba9f3fe23c1d2f7535d"} Mar 19 17:20:05 
crc kubenswrapper[4792]: I0319 17:20:05.555813 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:05 crc kubenswrapper[4792]: I0319 17:20:05.713405 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thhwf\" (UniqueName: \"kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf\") pod \"ec7cffd0-73b4-4d98-9ca5-a9884daad11f\" (UID: \"ec7cffd0-73b4-4d98-9ca5-a9884daad11f\") " Mar 19 17:20:05 crc kubenswrapper[4792]: I0319 17:20:05.722574 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf" (OuterVolumeSpecName: "kube-api-access-thhwf") pod "ec7cffd0-73b4-4d98-9ca5-a9884daad11f" (UID: "ec7cffd0-73b4-4d98-9ca5-a9884daad11f"). InnerVolumeSpecName "kube-api-access-thhwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:20:05 crc kubenswrapper[4792]: I0319 17:20:05.816572 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thhwf\" (UniqueName: \"kubernetes.io/projected/ec7cffd0-73b4-4d98-9ca5-a9884daad11f-kube-api-access-thhwf\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.075427 4792 scope.go:117] "RemoveContainer" containerID="e86207d7420fea5580fe6c0e95d73b0f43bd8c149ef25c3dbad618108428a998" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.113580 4792 scope.go:117] "RemoveContainer" containerID="fa80c62ab82397f6bdd3be4d2c052621b256d0c74c90521daa5c40d85d375f1a" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.144755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" event={"ID":"ec7cffd0-73b4-4d98-9ca5-a9884daad11f","Type":"ContainerDied","Data":"36bb0656b9c985b79fd6e4621403623ff73baace1b8f25b61ea9e1f4a1a43e79"} Mar 19 17:20:06 crc 
kubenswrapper[4792]: I0319 17:20:06.144797 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36bb0656b9c985b79fd6e4621403623ff73baace1b8f25b61ea9e1f4a1a43e79" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.144903 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565680-xkcn4" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.170480 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565674-s2pph"] Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.171580 4792 scope.go:117] "RemoveContainer" containerID="3a25d023da25c8874e36a26c96eb34acea89b96666086b5919a1d654552919d2" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.185126 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565674-s2pph"] Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.213180 4792 scope.go:117] "RemoveContainer" containerID="93201ff122fb160c37abd2ab1cae25945a9f9e171b51e8ce13ba109374c920e7" Mar 19 17:20:06 crc kubenswrapper[4792]: I0319 17:20:06.235976 4792 scope.go:117] "RemoveContainer" containerID="54237895011b42f1e1ef761f2c51fd652049608d60e7ed9708daa7fcc1061f55" Mar 19 17:20:07 crc kubenswrapper[4792]: I0319 17:20:07.748136 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:20:07 crc kubenswrapper[4792]: E0319 17:20:07.748745 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:20:07 crc kubenswrapper[4792]: 
I0319 17:20:07.753953 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4785ef-7e44-4cbc-8d9c-d96670a13000" path="/var/lib/kubelet/pods/6f4785ef-7e44-4cbc-8d9c-d96670a13000/volumes" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.712816 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:12 crc kubenswrapper[4792]: E0319 17:20:12.715289 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7cffd0-73b4-4d98-9ca5-a9884daad11f" containerName="oc" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.715374 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7cffd0-73b4-4d98-9ca5-a9884daad11f" containerName="oc" Mar 19 17:20:12 crc kubenswrapper[4792]: E0319 17:20:12.715436 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="registry-server" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.715492 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="registry-server" Mar 19 17:20:12 crc kubenswrapper[4792]: E0319 17:20:12.715570 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="extract-utilities" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.715629 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="extract-utilities" Mar 19 17:20:12 crc kubenswrapper[4792]: E0319 17:20:12.715713 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="extract-content" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.715770 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="extract-content" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 
17:20:12.716141 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7cffd0-73b4-4d98-9ca5-a9884daad11f" containerName="oc" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.716223 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9852dbf1-7eee-465e-8be7-d477a3efdc8c" containerName="registry-server" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.717957 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.728051 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.895625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.895987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.896701 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc 
kubenswrapper[4792]: I0319 17:20:12.999035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.999161 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.999254 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.999592 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:12 crc kubenswrapper[4792]: I0319 17:20:12.999763 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:13 crc kubenswrapper[4792]: I0319 17:20:13.028528 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn\") pod \"community-operators-5b7v4\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:13 crc kubenswrapper[4792]: I0319 17:20:13.042571 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:13 crc kubenswrapper[4792]: I0319 17:20:13.582811 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:14 crc kubenswrapper[4792]: I0319 17:20:14.285868 4792 generic.go:334] "Generic (PLEG): container finished" podID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerID="cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d" exitCode=0 Mar 19 17:20:14 crc kubenswrapper[4792]: I0319 17:20:14.285929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerDied","Data":"cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d"} Mar 19 17:20:14 crc kubenswrapper[4792]: I0319 17:20:14.286144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerStarted","Data":"35bd8c99d8d1b2672dcb262b3eade8b8bea1227468125b38cc05f35e03d20f76"} Mar 19 17:20:16 crc kubenswrapper[4792]: I0319 17:20:16.310318 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerStarted","Data":"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9"} Mar 19 17:20:16 crc kubenswrapper[4792]: I0319 17:20:16.312756 4792 
generic.go:334] "Generic (PLEG): container finished" podID="e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" containerID="70e02e40d4bfcabba28231ac0d05aeb5abd0bbb74249f3c95b03189b00d37eeb" exitCode=0 Mar 19 17:20:16 crc kubenswrapper[4792]: I0319 17:20:16.312800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" event={"ID":"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7","Type":"ContainerDied","Data":"70e02e40d4bfcabba28231ac0d05aeb5abd0bbb74249f3c95b03189b00d37eeb"} Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.324616 4792 generic.go:334] "Generic (PLEG): container finished" podID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerID="1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9" exitCode=0 Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.324676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerDied","Data":"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9"} Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.812183 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.932266 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam\") pod \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.933053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory\") pod \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.933297 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29hsp\" (UniqueName: \"kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp\") pod \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\" (UID: \"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7\") " Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.939146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp" (OuterVolumeSpecName: "kube-api-access-29hsp") pod "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" (UID: "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7"). InnerVolumeSpecName "kube-api-access-29hsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.964056 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory" (OuterVolumeSpecName: "inventory") pod "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" (UID: "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:20:17 crc kubenswrapper[4792]: I0319 17:20:17.969907 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" (UID: "e670e3cd-afa3-46ee-877d-1e0b61c4cbe7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.037325 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29hsp\" (UniqueName: \"kubernetes.io/projected/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-kube-api-access-29hsp\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.037365 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.037383 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e670e3cd-afa3-46ee-877d-1e0b61c4cbe7-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.339597 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerStarted","Data":"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee"} Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.342141 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.342117 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mf8b8" event={"ID":"e670e3cd-afa3-46ee-877d-1e0b61c4cbe7","Type":"ContainerDied","Data":"9e2c2f6f67720644827ee521b6f61ab61763473c2fdcc5554e8a0ea512be3987"} Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.342280 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2c2f6f67720644827ee521b6f61ab61763473c2fdcc5554e8a0ea512be3987" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.401636 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5b7v4" podStartSLOduration=2.575192656 podStartE2EDuration="6.401616312s" podCreationTimestamp="2026-03-19 17:20:12 +0000 UTC" firstStartedPulling="2026-03-19 17:20:14.287902956 +0000 UTC m=+2377.433960496" lastFinishedPulling="2026-03-19 17:20:18.114326612 +0000 UTC m=+2381.260384152" observedRunningTime="2026-03-19 17:20:18.383743178 +0000 UTC m=+2381.529800718" watchObservedRunningTime="2026-03-19 17:20:18.401616312 +0000 UTC m=+2381.547673852" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.465506 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9"] Mar 19 17:20:18 crc kubenswrapper[4792]: E0319 17:20:18.466080 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.466098 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.466361 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e670e3cd-afa3-46ee-877d-1e0b61c4cbe7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.467241 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.475254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.475494 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.475869 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.476127 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.482101 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9"] Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.552368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjmh\" (UniqueName: \"kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.552811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.552952 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.655308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.655407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.655600 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjmh\" (UniqueName: \"kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: 
\"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.660622 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.661987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.670628 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjmh\" (UniqueName: \"kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:18 crc kubenswrapper[4792]: I0319 17:20:18.789730 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:20:19 crc kubenswrapper[4792]: I0319 17:20:19.389565 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9"] Mar 19 17:20:20 crc kubenswrapper[4792]: I0319 17:20:20.362859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" event={"ID":"613fdf94-6607-47f4-aa3a-c99c1c500b9e","Type":"ContainerStarted","Data":"236f3dc8a98180a65a7a953d5c4e224ba84d812e09a2be5e6a469ffb19ed90a7"} Mar 19 17:20:20 crc kubenswrapper[4792]: I0319 17:20:20.363309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" event={"ID":"613fdf94-6607-47f4-aa3a-c99c1c500b9e","Type":"ContainerStarted","Data":"f381b8a37874e297b2f4efe3a4864aa2917854f90ce31822d9e00e89ba6e3311"} Mar 19 17:20:20 crc kubenswrapper[4792]: I0319 17:20:20.383722 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" podStartSLOduration=1.9401166989999998 podStartE2EDuration="2.383704163s" podCreationTimestamp="2026-03-19 17:20:18 +0000 UTC" firstStartedPulling="2026-03-19 17:20:19.395659435 +0000 UTC m=+2382.541716975" lastFinishedPulling="2026-03-19 17:20:19.839246899 +0000 UTC m=+2382.985304439" observedRunningTime="2026-03-19 17:20:20.378339396 +0000 UTC m=+2383.524396936" watchObservedRunningTime="2026-03-19 17:20:20.383704163 +0000 UTC m=+2383.529761703" Mar 19 17:20:20 crc kubenswrapper[4792]: I0319 17:20:20.739934 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:20:20 crc kubenswrapper[4792]: E0319 17:20:20.740632 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:20:23 crc kubenswrapper[4792]: I0319 17:20:23.043253 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:23 crc kubenswrapper[4792]: I0319 17:20:23.044996 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:23 crc kubenswrapper[4792]: I0319 17:20:23.104334 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:23 crc kubenswrapper[4792]: I0319 17:20:23.450725 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:23 crc kubenswrapper[4792]: I0319 17:20:23.513724 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:25 crc kubenswrapper[4792]: I0319 17:20:25.420502 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5b7v4" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="registry-server" containerID="cri-o://ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee" gracePeriod=2 Mar 19 17:20:25 crc kubenswrapper[4792]: I0319 17:20:25.990000 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.172806 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content\") pod \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.173298 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities\") pod \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.173340 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn\") pod \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\" (UID: \"a85cc9a6-90fb-435c-9d7f-d2b93a344853\") " Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.174270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities" (OuterVolumeSpecName: "utilities") pod "a85cc9a6-90fb-435c-9d7f-d2b93a344853" (UID: "a85cc9a6-90fb-435c-9d7f-d2b93a344853"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.180915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn" (OuterVolumeSpecName: "kube-api-access-vqwrn") pod "a85cc9a6-90fb-435c-9d7f-d2b93a344853" (UID: "a85cc9a6-90fb-435c-9d7f-d2b93a344853"). InnerVolumeSpecName "kube-api-access-vqwrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.263712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a85cc9a6-90fb-435c-9d7f-d2b93a344853" (UID: "a85cc9a6-90fb-435c-9d7f-d2b93a344853"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.277465 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.277500 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85cc9a6-90fb-435c-9d7f-d2b93a344853-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.277530 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/a85cc9a6-90fb-435c-9d7f-d2b93a344853-kube-api-access-vqwrn\") on node \"crc\" DevicePath \"\"" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.433402 4792 generic.go:334] "Generic (PLEG): container finished" podID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerID="ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee" exitCode=0 Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.434029 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5b7v4" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.434033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerDied","Data":"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee"} Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.436079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b7v4" event={"ID":"a85cc9a6-90fb-435c-9d7f-d2b93a344853","Type":"ContainerDied","Data":"35bd8c99d8d1b2672dcb262b3eade8b8bea1227468125b38cc05f35e03d20f76"} Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.436137 4792 scope.go:117] "RemoveContainer" containerID="ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.457593 4792 scope.go:117] "RemoveContainer" containerID="1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.489781 4792 scope.go:117] "RemoveContainer" containerID="cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.496928 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.507829 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5b7v4"] Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.555782 4792 scope.go:117] "RemoveContainer" containerID="ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee" Mar 19 17:20:26 crc kubenswrapper[4792]: E0319 17:20:26.556249 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee\": container with ID starting with ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee not found: ID does not exist" containerID="ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.556297 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee"} err="failed to get container status \"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee\": rpc error: code = NotFound desc = could not find container \"ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee\": container with ID starting with ee76aa429450ed9c515e2c1e277967812e7d265c5bbface944228a3b12a142ee not found: ID does not exist" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.556320 4792 scope.go:117] "RemoveContainer" containerID="1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9" Mar 19 17:20:26 crc kubenswrapper[4792]: E0319 17:20:26.556629 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9\": container with ID starting with 1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9 not found: ID does not exist" containerID="1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.556672 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9"} err="failed to get container status \"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9\": rpc error: code = NotFound desc = could not find container \"1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9\": container with ID 
starting with 1af005f2414d62fe29a1c72654dae85b52dd9b76614169a2f656842c535756d9 not found: ID does not exist" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.556699 4792 scope.go:117] "RemoveContainer" containerID="cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d" Mar 19 17:20:26 crc kubenswrapper[4792]: E0319 17:20:26.556986 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d\": container with ID starting with cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d not found: ID does not exist" containerID="cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d" Mar 19 17:20:26 crc kubenswrapper[4792]: I0319 17:20:26.557028 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d"} err="failed to get container status \"cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d\": rpc error: code = NotFound desc = could not find container \"cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d\": container with ID starting with cccaaf1b0d427eec05aa974aba1a4074136002c61e1a49cf44fc8005d853119d not found: ID does not exist" Mar 19 17:20:27 crc kubenswrapper[4792]: I0319 17:20:27.753103 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" path="/var/lib/kubelet/pods/a85cc9a6-90fb-435c-9d7f-d2b93a344853/volumes" Mar 19 17:20:32 crc kubenswrapper[4792]: I0319 17:20:32.739535 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:20:32 crc kubenswrapper[4792]: E0319 17:20:32.740402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:20:33 crc kubenswrapper[4792]: I0319 17:20:33.061888 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zb8c4"] Mar 19 17:20:33 crc kubenswrapper[4792]: I0319 17:20:33.080011 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zb8c4"] Mar 19 17:20:33 crc kubenswrapper[4792]: I0319 17:20:33.753605 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79cca8f9-a74b-422e-bded-61895a61cafc" path="/var/lib/kubelet/pods/79cca8f9-a74b-422e-bded-61895a61cafc/volumes" Mar 19 17:20:43 crc kubenswrapper[4792]: I0319 17:20:43.740403 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:20:43 crc kubenswrapper[4792]: E0319 17:20:43.741174 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:20:54 crc kubenswrapper[4792]: I0319 17:20:54.740182 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:20:54 crc kubenswrapper[4792]: E0319 17:20:54.741254 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:21:06 crc kubenswrapper[4792]: I0319 17:21:06.398010 4792 scope.go:117] "RemoveContainer" containerID="adb066379e5212825b668ce24f65347b0967df724c90ad6fe784edcd25a7a904" Mar 19 17:21:06 crc kubenswrapper[4792]: I0319 17:21:06.437260 4792 scope.go:117] "RemoveContainer" containerID="6ca3498d6b52c51c42ca0813b56546c94d3ed426e57e93a87b6b6a890ecc975e" Mar 19 17:21:07 crc kubenswrapper[4792]: I0319 17:21:07.746997 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:21:07 crc kubenswrapper[4792]: E0319 17:21:07.747710 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:21:09 crc kubenswrapper[4792]: I0319 17:21:09.019360 4792 generic.go:334] "Generic (PLEG): container finished" podID="613fdf94-6607-47f4-aa3a-c99c1c500b9e" containerID="236f3dc8a98180a65a7a953d5c4e224ba84d812e09a2be5e6a469ffb19ed90a7" exitCode=0 Mar 19 17:21:09 crc kubenswrapper[4792]: I0319 17:21:09.019408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" event={"ID":"613fdf94-6607-47f4-aa3a-c99c1c500b9e","Type":"ContainerDied","Data":"236f3dc8a98180a65a7a953d5c4e224ba84d812e09a2be5e6a469ffb19ed90a7"} Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.644904 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.656775 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhjmh\" (UniqueName: \"kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh\") pod \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.657118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory\") pod \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.657215 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam\") pod \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\" (UID: \"613fdf94-6607-47f4-aa3a-c99c1c500b9e\") " Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.664859 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh" (OuterVolumeSpecName: "kube-api-access-hhjmh") pod "613fdf94-6607-47f4-aa3a-c99c1c500b9e" (UID: "613fdf94-6607-47f4-aa3a-c99c1c500b9e"). InnerVolumeSpecName "kube-api-access-hhjmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.697528 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory" (OuterVolumeSpecName: "inventory") pod "613fdf94-6607-47f4-aa3a-c99c1c500b9e" (UID: "613fdf94-6607-47f4-aa3a-c99c1c500b9e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.697919 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "613fdf94-6607-47f4-aa3a-c99c1c500b9e" (UID: "613fdf94-6607-47f4-aa3a-c99c1c500b9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.759389 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.759423 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/613fdf94-6607-47f4-aa3a-c99c1c500b9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:10 crc kubenswrapper[4792]: I0319 17:21:10.759435 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhjmh\" (UniqueName: \"kubernetes.io/projected/613fdf94-6607-47f4-aa3a-c99c1c500b9e-kube-api-access-hhjmh\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.041355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" event={"ID":"613fdf94-6607-47f4-aa3a-c99c1c500b9e","Type":"ContainerDied","Data":"f381b8a37874e297b2f4efe3a4864aa2917854f90ce31822d9e00e89ba6e3311"} Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.041713 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f381b8a37874e297b2f4efe3a4864aa2917854f90ce31822d9e00e89ba6e3311" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 
17:21:11.041450 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134002 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6qj6v"] Mar 19 17:21:11 crc kubenswrapper[4792]: E0319 17:21:11.134484 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="registry-server" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134500 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="registry-server" Mar 19 17:21:11 crc kubenswrapper[4792]: E0319 17:21:11.134510 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613fdf94-6607-47f4-aa3a-c99c1c500b9e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134518 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="613fdf94-6607-47f4-aa3a-c99c1c500b9e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:11 crc kubenswrapper[4792]: E0319 17:21:11.134530 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="extract-content" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134536 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="extract-content" Mar 19 17:21:11 crc kubenswrapper[4792]: E0319 17:21:11.134554 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="extract-utilities" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134562 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="extract-utilities" 
Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134757 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85cc9a6-90fb-435c-9d7f-d2b93a344853" containerName="registry-server" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.134781 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="613fdf94-6607-47f4-aa3a-c99c1c500b9e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.135613 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.143902 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.145021 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.145167 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.145689 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.150667 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6qj6v"] Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.270508 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfw72\" (UniqueName: \"kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc 
kubenswrapper[4792]: I0319 17:21:11.270833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.271013 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.374870 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.375133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfw72\" (UniqueName: \"kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.375325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: 
\"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.379703 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.381302 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.440118 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfw72\" (UniqueName: \"kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72\") pod \"ssh-known-hosts-edpm-deployment-6qj6v\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:11 crc kubenswrapper[4792]: I0319 17:21:11.453188 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:12 crc kubenswrapper[4792]: I0319 17:21:12.187827 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6qj6v"] Mar 19 17:21:12 crc kubenswrapper[4792]: W0319 17:21:12.190159 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b88ce9a_9321_442b_ad61_bf8bdf229685.slice/crio-bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8 WatchSource:0}: Error finding container bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8: Status 404 returned error can't find the container with id bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8 Mar 19 17:21:12 crc kubenswrapper[4792]: I0319 17:21:12.193185 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:21:13 crc kubenswrapper[4792]: I0319 17:21:13.064398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" event={"ID":"4b88ce9a-9321-442b-ad61-bf8bdf229685","Type":"ContainerStarted","Data":"ca4e268c03b53edb6203ed169409d00fe3180fce1a57f22913128ea2613e4881"} Mar 19 17:21:13 crc kubenswrapper[4792]: I0319 17:21:13.065121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" event={"ID":"4b88ce9a-9321-442b-ad61-bf8bdf229685","Type":"ContainerStarted","Data":"bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8"} Mar 19 17:21:13 crc kubenswrapper[4792]: I0319 17:21:13.098257 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" podStartSLOduration=1.61268456 podStartE2EDuration="2.098236872s" podCreationTimestamp="2026-03-19 17:21:11 +0000 UTC" firstStartedPulling="2026-03-19 17:21:12.192934608 +0000 UTC m=+2435.338992148" 
lastFinishedPulling="2026-03-19 17:21:12.67848692 +0000 UTC m=+2435.824544460" observedRunningTime="2026-03-19 17:21:13.090451439 +0000 UTC m=+2436.236508979" watchObservedRunningTime="2026-03-19 17:21:13.098236872 +0000 UTC m=+2436.244294412" Mar 19 17:21:19 crc kubenswrapper[4792]: I0319 17:21:19.740236 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:21:19 crc kubenswrapper[4792]: E0319 17:21:19.741093 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:21:20 crc kubenswrapper[4792]: I0319 17:21:20.178079 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b88ce9a-9321-442b-ad61-bf8bdf229685" containerID="ca4e268c03b53edb6203ed169409d00fe3180fce1a57f22913128ea2613e4881" exitCode=0 Mar 19 17:21:20 crc kubenswrapper[4792]: I0319 17:21:20.178153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" event={"ID":"4b88ce9a-9321-442b-ad61-bf8bdf229685","Type":"ContainerDied","Data":"ca4e268c03b53edb6203ed169409d00fe3180fce1a57f22913128ea2613e4881"} Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.702540 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.852735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0\") pod \"4b88ce9a-9321-442b-ad61-bf8bdf229685\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.853171 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfw72\" (UniqueName: \"kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72\") pod \"4b88ce9a-9321-442b-ad61-bf8bdf229685\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.853243 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam\") pod \"4b88ce9a-9321-442b-ad61-bf8bdf229685\" (UID: \"4b88ce9a-9321-442b-ad61-bf8bdf229685\") " Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.861040 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72" (OuterVolumeSpecName: "kube-api-access-kfw72") pod "4b88ce9a-9321-442b-ad61-bf8bdf229685" (UID: "4b88ce9a-9321-442b-ad61-bf8bdf229685"). InnerVolumeSpecName "kube-api-access-kfw72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.887570 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b88ce9a-9321-442b-ad61-bf8bdf229685" (UID: "4b88ce9a-9321-442b-ad61-bf8bdf229685"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.888324 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4b88ce9a-9321-442b-ad61-bf8bdf229685" (UID: "4b88ce9a-9321-442b-ad61-bf8bdf229685"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.956280 4792 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.956317 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfw72\" (UniqueName: \"kubernetes.io/projected/4b88ce9a-9321-442b-ad61-bf8bdf229685-kube-api-access-kfw72\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:21 crc kubenswrapper[4792]: I0319 17:21:21.956333 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b88ce9a-9321-442b-ad61-bf8bdf229685-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.201718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" 
event={"ID":"4b88ce9a-9321-442b-ad61-bf8bdf229685","Type":"ContainerDied","Data":"bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8"} Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.201763 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6158166648002fdd38b2be53257802a719985dca0e698a8441dd4ee21fcac8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.201829 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6qj6v" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.295544 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8"] Mar 19 17:21:22 crc kubenswrapper[4792]: E0319 17:21:22.296281 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b88ce9a-9321-442b-ad61-bf8bdf229685" containerName="ssh-known-hosts-edpm-deployment" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.296302 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b88ce9a-9321-442b-ad61-bf8bdf229685" containerName="ssh-known-hosts-edpm-deployment" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.296556 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b88ce9a-9321-442b-ad61-bf8bdf229685" containerName="ssh-known-hosts-edpm-deployment" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.297679 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.299925 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.300566 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.300809 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.300869 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.306313 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8"] Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.470616 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.470795 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.471236 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dk62\" (UniqueName: \"kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.574657 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.574815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.574957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dk62\" (UniqueName: \"kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.580390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: 
\"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.588009 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.606316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dk62\" (UniqueName: \"kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz4b8\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:22 crc kubenswrapper[4792]: I0319 17:21:22.615578 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:23 crc kubenswrapper[4792]: I0319 17:21:23.201379 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8"] Mar 19 17:21:24 crc kubenswrapper[4792]: I0319 17:21:24.235288 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" event={"ID":"82a4769a-60b7-4414-a78b-c51858d9746f","Type":"ContainerStarted","Data":"06e23ff77cf561860c4b8b4f6ebcc7bc6c63c0a9bb511599b8db8faf44cfdfcf"} Mar 19 17:21:24 crc kubenswrapper[4792]: I0319 17:21:24.235805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" event={"ID":"82a4769a-60b7-4414-a78b-c51858d9746f","Type":"ContainerStarted","Data":"25f35bc549937f0f8c036a8cb63831bd47ead96b0132166676086d22d7014288"} Mar 19 17:21:24 crc kubenswrapper[4792]: I0319 17:21:24.263985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" podStartSLOduration=1.711867179 podStartE2EDuration="2.263962102s" podCreationTimestamp="2026-03-19 17:21:22 +0000 UTC" firstStartedPulling="2026-03-19 17:21:23.240897106 +0000 UTC m=+2446.386954646" lastFinishedPulling="2026-03-19 17:21:23.792992029 +0000 UTC m=+2446.939049569" observedRunningTime="2026-03-19 17:21:24.24962605 +0000 UTC m=+2447.395683590" watchObservedRunningTime="2026-03-19 17:21:24.263962102 +0000 UTC m=+2447.410019642" Mar 19 17:21:31 crc kubenswrapper[4792]: I0319 17:21:31.739491 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:21:31 crc kubenswrapper[4792]: E0319 17:21:31.740384 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:21:32 crc kubenswrapper[4792]: I0319 17:21:32.336632 4792 generic.go:334] "Generic (PLEG): container finished" podID="82a4769a-60b7-4414-a78b-c51858d9746f" containerID="06e23ff77cf561860c4b8b4f6ebcc7bc6c63c0a9bb511599b8db8faf44cfdfcf" exitCode=0 Mar 19 17:21:32 crc kubenswrapper[4792]: I0319 17:21:32.336814 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" event={"ID":"82a4769a-60b7-4414-a78b-c51858d9746f","Type":"ContainerDied","Data":"06e23ff77cf561860c4b8b4f6ebcc7bc6c63c0a9bb511599b8db8faf44cfdfcf"} Mar 19 17:21:33 crc kubenswrapper[4792]: I0319 17:21:33.804628 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:33 crc kubenswrapper[4792]: I0319 17:21:33.974492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam\") pod \"82a4769a-60b7-4414-a78b-c51858d9746f\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " Mar 19 17:21:33 crc kubenswrapper[4792]: I0319 17:21:33.974901 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dk62\" (UniqueName: \"kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62\") pod \"82a4769a-60b7-4414-a78b-c51858d9746f\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " Mar 19 17:21:33 crc kubenswrapper[4792]: I0319 17:21:33.975061 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory\") pod \"82a4769a-60b7-4414-a78b-c51858d9746f\" (UID: \"82a4769a-60b7-4414-a78b-c51858d9746f\") " Mar 19 17:21:33 crc kubenswrapper[4792]: I0319 17:21:33.983688 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62" (OuterVolumeSpecName: "kube-api-access-5dk62") pod "82a4769a-60b7-4414-a78b-c51858d9746f" (UID: "82a4769a-60b7-4414-a78b-c51858d9746f"). InnerVolumeSpecName "kube-api-access-5dk62". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.011753 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "82a4769a-60b7-4414-a78b-c51858d9746f" (UID: "82a4769a-60b7-4414-a78b-c51858d9746f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.025659 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory" (OuterVolumeSpecName: "inventory") pod "82a4769a-60b7-4414-a78b-c51858d9746f" (UID: "82a4769a-60b7-4414-a78b-c51858d9746f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.077690 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.077725 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dk62\" (UniqueName: \"kubernetes.io/projected/82a4769a-60b7-4414-a78b-c51858d9746f-kube-api-access-5dk62\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.077737 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82a4769a-60b7-4414-a78b-c51858d9746f-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.358712 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" event={"ID":"82a4769a-60b7-4414-a78b-c51858d9746f","Type":"ContainerDied","Data":"25f35bc549937f0f8c036a8cb63831bd47ead96b0132166676086d22d7014288"} Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.358747 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f35bc549937f0f8c036a8cb63831bd47ead96b0132166676086d22d7014288" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.358757 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz4b8" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.455025 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l"] Mar 19 17:21:34 crc kubenswrapper[4792]: E0319 17:21:34.455954 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a4769a-60b7-4414-a78b-c51858d9746f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.455981 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a4769a-60b7-4414-a78b-c51858d9746f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.456292 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a4769a-60b7-4414-a78b-c51858d9746f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.457507 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.461508 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.461618 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.461668 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.465792 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.480231 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l"] Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.595597 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62s4x\" (UniqueName: \"kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.595644 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 
17:21:34.595917 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.698623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62s4x\" (UniqueName: \"kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.698694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.698810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.704339 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.705356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.714896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62s4x\" (UniqueName: \"kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:34 crc kubenswrapper[4792]: I0319 17:21:34.787631 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:35 crc kubenswrapper[4792]: I0319 17:21:35.396714 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l"] Mar 19 17:21:36 crc kubenswrapper[4792]: I0319 17:21:36.383975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" event={"ID":"32de0a99-ddf0-436b-ab7a-9223cc6d5de1","Type":"ContainerStarted","Data":"c1ead2a6ed7c8a9440978a205ffe5c480f102fce07e88959a856fa93fa8a7cc2"} Mar 19 17:21:36 crc kubenswrapper[4792]: I0319 17:21:36.384711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" event={"ID":"32de0a99-ddf0-436b-ab7a-9223cc6d5de1","Type":"ContainerStarted","Data":"3afdb525425c90f01ca9a7a9dc9c729b990cbd38098e3f8cebe5c5b7a48e6850"} Mar 19 17:21:36 crc kubenswrapper[4792]: I0319 17:21:36.407265 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" podStartSLOduration=1.994020731 podStartE2EDuration="2.407244994s" podCreationTimestamp="2026-03-19 17:21:34 +0000 UTC" firstStartedPulling="2026-03-19 17:21:35.398561102 +0000 UTC m=+2458.544618642" lastFinishedPulling="2026-03-19 17:21:35.811785365 +0000 UTC m=+2458.957842905" observedRunningTime="2026-03-19 17:21:36.401340963 +0000 UTC m=+2459.547398503" watchObservedRunningTime="2026-03-19 17:21:36.407244994 +0000 UTC m=+2459.553302534" Mar 19 17:21:45 crc kubenswrapper[4792]: I0319 17:21:45.494256 4792 generic.go:334] "Generic (PLEG): container finished" podID="32de0a99-ddf0-436b-ab7a-9223cc6d5de1" containerID="c1ead2a6ed7c8a9440978a205ffe5c480f102fce07e88959a856fa93fa8a7cc2" exitCode=0 Mar 19 17:21:45 crc kubenswrapper[4792]: I0319 17:21:45.494388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" event={"ID":"32de0a99-ddf0-436b-ab7a-9223cc6d5de1","Type":"ContainerDied","Data":"c1ead2a6ed7c8a9440978a205ffe5c480f102fce07e88959a856fa93fa8a7cc2"} Mar 19 17:21:46 crc kubenswrapper[4792]: I0319 17:21:46.741607 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:21:46 crc kubenswrapper[4792]: E0319 17:21:46.742829 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.049786 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.212758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory\") pod \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.212889 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam\") pod \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.213158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62s4x\" (UniqueName: \"kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x\") pod \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\" (UID: \"32de0a99-ddf0-436b-ab7a-9223cc6d5de1\") " Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.217890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x" (OuterVolumeSpecName: "kube-api-access-62s4x") pod "32de0a99-ddf0-436b-ab7a-9223cc6d5de1" (UID: "32de0a99-ddf0-436b-ab7a-9223cc6d5de1"). InnerVolumeSpecName "kube-api-access-62s4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.241912 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory" (OuterVolumeSpecName: "inventory") pod "32de0a99-ddf0-436b-ab7a-9223cc6d5de1" (UID: "32de0a99-ddf0-436b-ab7a-9223cc6d5de1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.254676 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "32de0a99-ddf0-436b-ab7a-9223cc6d5de1" (UID: "32de0a99-ddf0-436b-ab7a-9223cc6d5de1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.317811 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.317942 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62s4x\" (UniqueName: \"kubernetes.io/projected/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-kube-api-access-62s4x\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.317974 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32de0a99-ddf0-436b-ab7a-9223cc6d5de1-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.528396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" event={"ID":"32de0a99-ddf0-436b-ab7a-9223cc6d5de1","Type":"ContainerDied","Data":"3afdb525425c90f01ca9a7a9dc9c729b990cbd38098e3f8cebe5c5b7a48e6850"} Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.528439 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3afdb525425c90f01ca9a7a9dc9c729b990cbd38098e3f8cebe5c5b7a48e6850" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 
17:21:47.528499 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.672665 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl"] Mar 19 17:21:47 crc kubenswrapper[4792]: E0319 17:21:47.673642 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32de0a99-ddf0-436b-ab7a-9223cc6d5de1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.673694 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="32de0a99-ddf0-436b-ab7a-9223cc6d5de1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.674152 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="32de0a99-ddf0-436b-ab7a-9223cc6d5de1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.675555 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678138 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678371 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678386 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678215 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678670 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.678934 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.679186 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.680543 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.681516 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.702371 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl"] Mar 19 17:21:47 crc 
kubenswrapper[4792]: I0319 17:21:47.829547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.829654 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhv9s\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.829681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.829756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 
17:21:47.829881 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.829975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830382 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830596 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830718 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 
19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830902 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.830983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.831010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 
crc kubenswrapper[4792]: I0319 17:21:47.933068 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933793 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933890 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933934 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933961 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.933988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934012 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhv9s\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.934165 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.938019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.938101 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.938584 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.939080 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.939393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.939419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.940028 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.940228 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.940382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: 
\"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.940980 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.941543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.942369 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.942893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 
17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.945076 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.945476 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.951723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhv9s\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:47 crc kubenswrapper[4792]: I0319 17:21:47.994457 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:21:48 crc kubenswrapper[4792]: I0319 17:21:48.541648 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl"] Mar 19 17:21:49 crc kubenswrapper[4792]: I0319 17:21:49.549506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" event={"ID":"f429099b-d78b-4ecc-9606-16da762eb608","Type":"ContainerStarted","Data":"8d1fa35fe77460745beb22d20e54202368ffb306fc13c6c5893d97e61a29a5a0"} Mar 19 17:21:49 crc kubenswrapper[4792]: I0319 17:21:49.549855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" event={"ID":"f429099b-d78b-4ecc-9606-16da762eb608","Type":"ContainerStarted","Data":"ebb3c23412084c5760e62d283b9ed5b13d12fbd476cb29329f1a28deaa1292a0"} Mar 19 17:21:49 crc kubenswrapper[4792]: I0319 17:21:49.578135 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" podStartSLOduration=2.127596202 podStartE2EDuration="2.578104595s" podCreationTimestamp="2026-03-19 17:21:47 +0000 UTC" firstStartedPulling="2026-03-19 17:21:48.54191661 +0000 UTC m=+2471.687974150" lastFinishedPulling="2026-03-19 17:21:48.992424993 +0000 UTC m=+2472.138482543" observedRunningTime="2026-03-19 17:21:49.572763569 +0000 UTC m=+2472.718821119" watchObservedRunningTime="2026-03-19 17:21:49.578104595 +0000 UTC m=+2472.724162165" Mar 19 17:21:59 crc kubenswrapper[4792]: I0319 17:21:59.740197 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:21:59 crc kubenswrapper[4792]: E0319 17:21:59.741027 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.161897 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565682-ntsm2"] Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.165193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.174439 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.174638 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.175451 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.193548 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565682-ntsm2"] Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.289131 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrh2\" (UniqueName: \"kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2\") pod \"auto-csr-approver-29565682-ntsm2\" (UID: \"b71ac1bc-4e86-4aea-8c12-2925090c5b44\") " pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.391376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrh2\" (UniqueName: 
\"kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2\") pod \"auto-csr-approver-29565682-ntsm2\" (UID: \"b71ac1bc-4e86-4aea-8c12-2925090c5b44\") " pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.426994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrh2\" (UniqueName: \"kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2\") pod \"auto-csr-approver-29565682-ntsm2\" (UID: \"b71ac1bc-4e86-4aea-8c12-2925090c5b44\") " pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.511972 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:00 crc kubenswrapper[4792]: I0319 17:22:00.999034 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565682-ntsm2"] Mar 19 17:22:01 crc kubenswrapper[4792]: I0319 17:22:01.712296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" event={"ID":"b71ac1bc-4e86-4aea-8c12-2925090c5b44","Type":"ContainerStarted","Data":"95a0e337ddc2eda914ee644f1aab4137becd48dc89446e9a45c26cf4daa0be1c"} Mar 19 17:22:02 crc kubenswrapper[4792]: I0319 17:22:02.724834 4792 generic.go:334] "Generic (PLEG): container finished" podID="b71ac1bc-4e86-4aea-8c12-2925090c5b44" containerID="19a22a7c9ff093af88c1f5badba413c1b1a7be745bb05f0f1d7bb9bfbed1a47c" exitCode=0 Mar 19 17:22:02 crc kubenswrapper[4792]: I0319 17:22:02.724906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" event={"ID":"b71ac1bc-4e86-4aea-8c12-2925090c5b44","Type":"ContainerDied","Data":"19a22a7c9ff093af88c1f5badba413c1b1a7be745bb05f0f1d7bb9bfbed1a47c"} Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.162209 4792 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.205905 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrh2\" (UniqueName: \"kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2\") pod \"b71ac1bc-4e86-4aea-8c12-2925090c5b44\" (UID: \"b71ac1bc-4e86-4aea-8c12-2925090c5b44\") " Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.213166 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2" (OuterVolumeSpecName: "kube-api-access-skrh2") pod "b71ac1bc-4e86-4aea-8c12-2925090c5b44" (UID: "b71ac1bc-4e86-4aea-8c12-2925090c5b44"). InnerVolumeSpecName "kube-api-access-skrh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.308211 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrh2\" (UniqueName: \"kubernetes.io/projected/b71ac1bc-4e86-4aea-8c12-2925090c5b44-kube-api-access-skrh2\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.752244 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" event={"ID":"b71ac1bc-4e86-4aea-8c12-2925090c5b44","Type":"ContainerDied","Data":"95a0e337ddc2eda914ee644f1aab4137becd48dc89446e9a45c26cf4daa0be1c"} Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.752290 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a0e337ddc2eda914ee644f1aab4137becd48dc89446e9a45c26cf4daa0be1c" Mar 19 17:22:04 crc kubenswrapper[4792]: I0319 17:22:04.752300 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565682-ntsm2" Mar 19 17:22:05 crc kubenswrapper[4792]: I0319 17:22:05.228249 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565676-4rrrc"] Mar 19 17:22:05 crc kubenswrapper[4792]: I0319 17:22:05.239703 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565676-4rrrc"] Mar 19 17:22:05 crc kubenswrapper[4792]: I0319 17:22:05.753158 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a399b4b6-821d-48b7-a4e7-8d67428edfa6" path="/var/lib/kubelet/pods/a399b4b6-821d-48b7-a4e7-8d67428edfa6/volumes" Mar 19 17:22:06 crc kubenswrapper[4792]: I0319 17:22:06.574857 4792 scope.go:117] "RemoveContainer" containerID="de1b1ba0db865e7e45c82f8fa7a90f888b31b64f81ef2be5cd00a4708e08e86d" Mar 19 17:22:12 crc kubenswrapper[4792]: I0319 17:22:12.740546 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:22:12 crc kubenswrapper[4792]: E0319 17:22:12.741374 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:22:20 crc kubenswrapper[4792]: I0319 17:22:20.040505 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6dwwv"] Mar 19 17:22:20 crc kubenswrapper[4792]: I0319 17:22:20.050334 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6dwwv"] Mar 19 17:22:21 crc kubenswrapper[4792]: I0319 17:22:21.759562 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a5ee639b-34bf-4824-902d-e38af5ad4527" path="/var/lib/kubelet/pods/a5ee639b-34bf-4824-902d-e38af5ad4527/volumes" Mar 19 17:22:23 crc kubenswrapper[4792]: I0319 17:22:23.741038 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:22:23 crc kubenswrapper[4792]: E0319 17:22:23.742227 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:22:32 crc kubenswrapper[4792]: I0319 17:22:32.040917 4792 generic.go:334] "Generic (PLEG): container finished" podID="f429099b-d78b-4ecc-9606-16da762eb608" containerID="8d1fa35fe77460745beb22d20e54202368ffb306fc13c6c5893d97e61a29a5a0" exitCode=0 Mar 19 17:22:32 crc kubenswrapper[4792]: I0319 17:22:32.041024 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" event={"ID":"f429099b-d78b-4ecc-9606-16da762eb608","Type":"ContainerDied","Data":"8d1fa35fe77460745beb22d20e54202368ffb306fc13c6c5893d97e61a29a5a0"} Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.511558 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540083 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540344 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540398 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhv9s\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle\") pod 
\"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540548 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540686 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540759 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: 
\"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.540939 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.541040 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.541094 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.541195 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.541278 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: 
\"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.541324 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.547387 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.548036 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.549352 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.550568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.552472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.552514 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.552532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.553198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s" (OuterVolumeSpecName: "kube-api-access-lhv9s") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "kube-api-access-lhv9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.553738 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.553747 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.557218 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.557809 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.560345 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.561047 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: E0319 17:22:33.582498 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam podName:f429099b-d78b-4ecc-9606-16da762eb608 nodeName:}" failed. No retries permitted until 2026-03-19 17:22:34.08242452 +0000 UTC m=+2517.228482060 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608") : error deleting /var/lib/kubelet/pods/f429099b-d78b-4ecc-9606-16da762eb608/volume-subpaths: remove /var/lib/kubelet/pods/f429099b-d78b-4ecc-9606-16da762eb608/volume-subpaths: no such file or directory Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.587383 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory" (OuterVolumeSpecName: "inventory") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645048 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645081 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhv9s\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-kube-api-access-lhv9s\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645091 4792 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645100 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645110 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645122 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645130 4792 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645139 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645148 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645159 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645170 4792 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645180 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645189 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 
19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645198 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f429099b-d78b-4ecc-9606-16da762eb608-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:33 crc kubenswrapper[4792]: I0319 17:22:33.645211 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.063194 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" event={"ID":"f429099b-d78b-4ecc-9606-16da762eb608","Type":"ContainerDied","Data":"ebb3c23412084c5760e62d283b9ed5b13d12fbd476cb29329f1a28deaa1292a0"} Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.063578 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb3c23412084c5760e62d283b9ed5b13d12fbd476cb29329f1a28deaa1292a0" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.063256 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.156318 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") pod \"f429099b-d78b-4ecc-9606-16da762eb608\" (UID: \"f429099b-d78b-4ecc-9606-16da762eb608\") " Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.165181 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f429099b-d78b-4ecc-9606-16da762eb608" (UID: "f429099b-d78b-4ecc-9606-16da762eb608"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.167911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn"] Mar 19 17:22:34 crc kubenswrapper[4792]: E0319 17:22:34.168575 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f429099b-d78b-4ecc-9606-16da762eb608" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.168600 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f429099b-d78b-4ecc-9606-16da762eb608" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 19 17:22:34 crc kubenswrapper[4792]: E0319 17:22:34.168634 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71ac1bc-4e86-4aea-8c12-2925090c5b44" containerName="oc" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.168643 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71ac1bc-4e86-4aea-8c12-2925090c5b44" containerName="oc" Mar 19 17:22:34 crc 
kubenswrapper[4792]: I0319 17:22:34.168931 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71ac1bc-4e86-4aea-8c12-2925090c5b44" containerName="oc" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.168977 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f429099b-d78b-4ecc-9606-16da762eb608" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.170059 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.171703 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.179345 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn"] Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.258991 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qnw\" (UniqueName: \"kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.259284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.259385 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.259482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.259580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.259749 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f429099b-d78b-4ecc-9606-16da762eb608-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.362108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qnw\" (UniqueName: \"kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 
17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.362214 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.362258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.362300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.362318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.363442 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.366893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.367409 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.368937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.380262 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qnw\" (UniqueName: \"kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mhcn\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:34 crc kubenswrapper[4792]: I0319 17:22:34.536376 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:22:35 crc kubenswrapper[4792]: I0319 17:22:35.067076 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn"] Mar 19 17:22:36 crc kubenswrapper[4792]: I0319 17:22:36.088769 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" event={"ID":"e472cc5f-822a-4e33-8f16-04cc02cbae89","Type":"ContainerStarted","Data":"4006bca9452f55ffc6546fb8744843fb5dc8e249566612ae94009c7a4feb67a9"} Mar 19 17:22:36 crc kubenswrapper[4792]: I0319 17:22:36.089420 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" event={"ID":"e472cc5f-822a-4e33-8f16-04cc02cbae89","Type":"ContainerStarted","Data":"c1cf98840acf267700c54013bb299a962e22d6f47a500d50e90266e4d15970b4"} Mar 19 17:22:36 crc kubenswrapper[4792]: I0319 17:22:36.115142 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" podStartSLOduration=1.581167966 podStartE2EDuration="2.115126143s" podCreationTimestamp="2026-03-19 17:22:34 +0000 UTC" firstStartedPulling="2026-03-19 17:22:35.073817488 +0000 UTC m=+2518.219875028" lastFinishedPulling="2026-03-19 17:22:35.607775665 +0000 UTC m=+2518.753833205" observedRunningTime="2026-03-19 17:22:36.112556163 +0000 UTC m=+2519.258613713" watchObservedRunningTime="2026-03-19 17:22:36.115126143 +0000 UTC m=+2519.261183683" Mar 19 17:22:37 crc kubenswrapper[4792]: I0319 17:22:37.750670 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:22:37 crc kubenswrapper[4792]: E0319 17:22:37.751618 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:22:50 crc kubenswrapper[4792]: I0319 17:22:50.740738 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:22:50 crc kubenswrapper[4792]: E0319 17:22:50.741723 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:23:01 crc kubenswrapper[4792]: I0319 17:23:01.051982 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rcrgb"] Mar 19 17:23:01 crc kubenswrapper[4792]: I0319 17:23:01.064056 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rcrgb"] Mar 19 17:23:01 crc kubenswrapper[4792]: I0319 17:23:01.755516 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa5959e-fb35-4b6e-95de-d7a87bf4479e" path="/var/lib/kubelet/pods/2fa5959e-fb35-4b6e-95de-d7a87bf4479e/volumes" Mar 19 17:23:04 crc kubenswrapper[4792]: I0319 17:23:04.741159 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:23:04 crc kubenswrapper[4792]: E0319 17:23:04.742412 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:23:06 crc kubenswrapper[4792]: I0319 17:23:06.647600 4792 scope.go:117] "RemoveContainer" containerID="ae006aa0b5bfafe805c62a798245f2fc0ae0a34d6ff2a79ec2f8baa42508d5b3" Mar 19 17:23:06 crc kubenswrapper[4792]: I0319 17:23:06.684982 4792 scope.go:117] "RemoveContainer" containerID="d0847bc6693c17f9272bbcea2dd07bc513bad0510b6baedb34cdc5633b92d5e9" Mar 19 17:23:15 crc kubenswrapper[4792]: I0319 17:23:15.739925 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:23:15 crc kubenswrapper[4792]: E0319 17:23:15.740788 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:23:28 crc kubenswrapper[4792]: I0319 17:23:28.740288 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:23:28 crc kubenswrapper[4792]: E0319 17:23:28.741714 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:23:35 crc kubenswrapper[4792]: I0319 17:23:35.716260 4792 
generic.go:334] "Generic (PLEG): container finished" podID="e472cc5f-822a-4e33-8f16-04cc02cbae89" containerID="4006bca9452f55ffc6546fb8744843fb5dc8e249566612ae94009c7a4feb67a9" exitCode=0 Mar 19 17:23:35 crc kubenswrapper[4792]: I0319 17:23:35.716340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" event={"ID":"e472cc5f-822a-4e33-8f16-04cc02cbae89","Type":"ContainerDied","Data":"4006bca9452f55ffc6546fb8744843fb5dc8e249566612ae94009c7a4feb67a9"} Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.230298 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.373916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory\") pod \"e472cc5f-822a-4e33-8f16-04cc02cbae89\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.374168 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam\") pod \"e472cc5f-822a-4e33-8f16-04cc02cbae89\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.374492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0\") pod \"e472cc5f-822a-4e33-8f16-04cc02cbae89\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.374605 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle\") pod \"e472cc5f-822a-4e33-8f16-04cc02cbae89\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.374775 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qnw\" (UniqueName: \"kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw\") pod \"e472cc5f-822a-4e33-8f16-04cc02cbae89\" (UID: \"e472cc5f-822a-4e33-8f16-04cc02cbae89\") " Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.394291 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e472cc5f-822a-4e33-8f16-04cc02cbae89" (UID: "e472cc5f-822a-4e33-8f16-04cc02cbae89"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.394991 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw" (OuterVolumeSpecName: "kube-api-access-f4qnw") pod "e472cc5f-822a-4e33-8f16-04cc02cbae89" (UID: "e472cc5f-822a-4e33-8f16-04cc02cbae89"). InnerVolumeSpecName "kube-api-access-f4qnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.404807 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e472cc5f-822a-4e33-8f16-04cc02cbae89" (UID: "e472cc5f-822a-4e33-8f16-04cc02cbae89"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.405773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory" (OuterVolumeSpecName: "inventory") pod "e472cc5f-822a-4e33-8f16-04cc02cbae89" (UID: "e472cc5f-822a-4e33-8f16-04cc02cbae89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.405930 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e472cc5f-822a-4e33-8f16-04cc02cbae89" (UID: "e472cc5f-822a-4e33-8f16-04cc02cbae89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.479131 4792 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.479169 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.479181 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qnw\" (UniqueName: \"kubernetes.io/projected/e472cc5f-822a-4e33-8f16-04cc02cbae89-kube-api-access-f4qnw\") on node \"crc\" DevicePath \"\"" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.479189 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.479199 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e472cc5f-822a-4e33-8f16-04cc02cbae89-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.750035 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.758234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mhcn" event={"ID":"e472cc5f-822a-4e33-8f16-04cc02cbae89","Type":"ContainerDied","Data":"c1cf98840acf267700c54013bb299a962e22d6f47a500d50e90266e4d15970b4"} Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.758303 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cf98840acf267700c54013bb299a962e22d6f47a500d50e90266e4d15970b4" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.959823 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d"] Mar 19 17:23:37 crc kubenswrapper[4792]: E0319 17:23:37.960490 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472cc5f-822a-4e33-8f16-04cc02cbae89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.960504 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472cc5f-822a-4e33-8f16-04cc02cbae89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.960744 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e472cc5f-822a-4e33-8f16-04cc02cbae89" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 
19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.961552 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.964022 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.964375 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.964809 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.967588 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.971056 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.971258 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.974553 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d"] Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.990615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.990926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.991036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx82p\" (UniqueName: \"kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.991154 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.991256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:37 crc kubenswrapper[4792]: I0319 17:23:37.991342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx82p\" (UniqueName: \"kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094742 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 
19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094828 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.094989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.098547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.098976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.099587 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.100372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.102279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.113553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx82p\" (UniqueName: 
\"kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.280455 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:23:38 crc kubenswrapper[4792]: I0319 17:23:38.839379 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d"] Mar 19 17:23:39 crc kubenswrapper[4792]: I0319 17:23:39.765564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" event={"ID":"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a","Type":"ContainerStarted","Data":"14c3a59e00e44bf49b58664027c081f77bca1450563b63927a203f18758ceec9"} Mar 19 17:23:40 crc kubenswrapper[4792]: I0319 17:23:40.740064 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:23:40 crc kubenswrapper[4792]: E0319 17:23:40.740629 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:23:40 crc kubenswrapper[4792]: I0319 17:23:40.778164 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" 
event={"ID":"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a","Type":"ContainerStarted","Data":"ab11ca63195bc9c85df68ad95588e29b573ac61b58de87021901bb2055d235c5"} Mar 19 17:23:40 crc kubenswrapper[4792]: I0319 17:23:40.809118 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" podStartSLOduration=3.014045708 podStartE2EDuration="3.809099302s" podCreationTimestamp="2026-03-19 17:23:37 +0000 UTC" firstStartedPulling="2026-03-19 17:23:38.846463435 +0000 UTC m=+2581.992520985" lastFinishedPulling="2026-03-19 17:23:39.641517009 +0000 UTC m=+2582.787574579" observedRunningTime="2026-03-19 17:23:40.796382545 +0000 UTC m=+2583.942440075" watchObservedRunningTime="2026-03-19 17:23:40.809099302 +0000 UTC m=+2583.955156842" Mar 19 17:23:43 crc kubenswrapper[4792]: E0319 17:23:43.101695 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:23:46 crc kubenswrapper[4792]: E0319 17:23:46.488605 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:23:48 crc kubenswrapper[4792]: E0319 17:23:48.246462 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:23:48 crc kubenswrapper[4792]: E0319 17:23:48.246468 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:23:52 crc kubenswrapper[4792]: I0319 17:23:52.739625 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:23:53 crc kubenswrapper[4792]: I0319 17:23:53.929482 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06"} Mar 19 17:23:56 crc kubenswrapper[4792]: E0319 17:23:56.785960 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:23:57 crc kubenswrapper[4792]: E0319 17:23:57.869746 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.208276 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565684-4bhzd"] Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.210659 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.213348 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.213474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.217263 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.225314 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565684-4bhzd"] Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.342721 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5qd\" (UniqueName: \"kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd\") pod \"auto-csr-approver-29565684-4bhzd\" (UID: \"ad7114ba-b847-4b00-b712-58f699f99544\") " pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.468265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5qd\" (UniqueName: \"kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd\") pod \"auto-csr-approver-29565684-4bhzd\" (UID: \"ad7114ba-b847-4b00-b712-58f699f99544\") " pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.488513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5qd\" (UniqueName: \"kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd\") pod \"auto-csr-approver-29565684-4bhzd\" (UID: \"ad7114ba-b847-4b00-b712-58f699f99544\") " 
pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:00 crc kubenswrapper[4792]: I0319 17:24:00.534342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:01 crc kubenswrapper[4792]: I0319 17:24:01.075427 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565684-4bhzd"] Mar 19 17:24:01 crc kubenswrapper[4792]: I0319 17:24:01.142381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" event={"ID":"ad7114ba-b847-4b00-b712-58f699f99544","Type":"ContainerStarted","Data":"49212fe5d8832177a702da2f650564187e6b056b95dc5911728e81fcdc716f10"} Mar 19 17:24:03 crc kubenswrapper[4792]: I0319 17:24:03.168713 4792 generic.go:334] "Generic (PLEG): container finished" podID="ad7114ba-b847-4b00-b712-58f699f99544" containerID="0cd1c5cabe3711b55d5841a4616b4947c528535190728dc10709624c481de5a9" exitCode=0 Mar 19 17:24:03 crc kubenswrapper[4792]: I0319 17:24:03.168825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" event={"ID":"ad7114ba-b847-4b00-b712-58f699f99544","Type":"ContainerDied","Data":"0cd1c5cabe3711b55d5841a4616b4947c528535190728dc10709624c481de5a9"} Mar 19 17:24:04 crc kubenswrapper[4792]: I0319 17:24:04.515242 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:04 crc kubenswrapper[4792]: I0319 17:24:04.670191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w5qd\" (UniqueName: \"kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd\") pod \"ad7114ba-b847-4b00-b712-58f699f99544\" (UID: \"ad7114ba-b847-4b00-b712-58f699f99544\") " Mar 19 17:24:04 crc kubenswrapper[4792]: I0319 17:24:04.685581 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd" (OuterVolumeSpecName: "kube-api-access-7w5qd") pod "ad7114ba-b847-4b00-b712-58f699f99544" (UID: "ad7114ba-b847-4b00-b712-58f699f99544"). InnerVolumeSpecName "kube-api-access-7w5qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:24:04 crc kubenswrapper[4792]: I0319 17:24:04.772776 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w5qd\" (UniqueName: \"kubernetes.io/projected/ad7114ba-b847-4b00-b712-58f699f99544-kube-api-access-7w5qd\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.194730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" event={"ID":"ad7114ba-b847-4b00-b712-58f699f99544","Type":"ContainerDied","Data":"49212fe5d8832177a702da2f650564187e6b056b95dc5911728e81fcdc716f10"} Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.194778 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49212fe5d8832177a702da2f650564187e6b056b95dc5911728e81fcdc716f10" Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.194985 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565684-4bhzd" Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.605081 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565678-kbpzb"] Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.618445 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565678-kbpzb"] Mar 19 17:24:05 crc kubenswrapper[4792]: I0319 17:24:05.753749 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01596d95-4a47-495b-8ba8-d62187e696ee" path="/var/lib/kubelet/pods/01596d95-4a47-495b-8ba8-d62187e696ee/volumes" Mar 19 17:24:06 crc kubenswrapper[4792]: I0319 17:24:06.824264 4792 scope.go:117] "RemoveContainer" containerID="a93f0052a2f257e72063fc338ca14da77f58ffa44903abf86bf49def745d65a9" Mar 19 17:24:06 crc kubenswrapper[4792]: E0319 17:24:06.851290 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:06 crc kubenswrapper[4792]: I0319 17:24:06.860603 4792 scope.go:117] "RemoveContainer" containerID="2131b9688aa0bad94912a221c649c100da5c89e5a44f6ba0fa788a344144747c" Mar 19 17:24:06 crc kubenswrapper[4792]: I0319 17:24:06.907645 4792 scope.go:117] "RemoveContainer" containerID="bcf0d30310f5467c50f4c960a522289cb71da6c7f7f8bdc64eb5655614c3cde7" Mar 19 17:24:06 crc kubenswrapper[4792]: I0319 17:24:06.959721 4792 scope.go:117] "RemoveContainer" containerID="7b5c1f04ccc52d5ca95ea356f68ae069515015264923d7a8972181b8f60a0bcf" Mar 19 17:24:13 crc kubenswrapper[4792]: E0319 17:24:13.132315 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:16 crc kubenswrapper[4792]: E0319 17:24:16.900745 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:25 crc kubenswrapper[4792]: I0319 17:24:25.418173 4792 generic.go:334] "Generic (PLEG): container finished" podID="9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" containerID="ab11ca63195bc9c85df68ad95588e29b573ac61b58de87021901bb2055d235c5" exitCode=0 Mar 19 17:24:25 crc kubenswrapper[4792]: I0319 17:24:25.418257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" event={"ID":"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a","Type":"ContainerDied","Data":"ab11ca63195bc9c85df68ad95588e29b573ac61b58de87021901bb2055d235c5"} Mar 19 17:24:26 crc kubenswrapper[4792]: I0319 17:24:26.994376 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.137137 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.137574 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx82p\" (UniqueName: \"kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.137768 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.138005 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.138392 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 
19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.138624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam\") pod \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\" (UID: \"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a\") " Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.149581 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p" (OuterVolumeSpecName: "kube-api-access-hx82p") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "kube-api-access-hx82p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.159122 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.175383 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.182567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.184762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: E0319 17:24:27.196952 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.207195 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory" (OuterVolumeSpecName: "inventory") pod "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" (UID: "9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249596 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249642 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx82p\" (UniqueName: \"kubernetes.io/projected/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-kube-api-access-hx82p\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249657 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249669 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249689 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.249703 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.444750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" event={"ID":"9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a","Type":"ContainerDied","Data":"14c3a59e00e44bf49b58664027c081f77bca1450563b63927a203f18758ceec9"} Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.445113 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c3a59e00e44bf49b58664027c081f77bca1450563b63927a203f18758ceec9" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.444958 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.589099 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4"] Mar 19 17:24:27 crc kubenswrapper[4792]: E0319 17:24:27.589584 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7114ba-b847-4b00-b712-58f699f99544" containerName="oc" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.589602 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7114ba-b847-4b00-b712-58f699f99544" containerName="oc" Mar 19 17:24:27 crc kubenswrapper[4792]: E0319 17:24:27.589632 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.589640 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.589872 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.589897 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7114ba-b847-4b00-b712-58f699f99544" containerName="oc" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.590672 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.593296 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.594383 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.594620 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.595224 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.599325 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.604235 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4"] Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.659563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.659663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.659691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.659754 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ck2l\" (UniqueName: \"kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.659789 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.761880 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" 
(UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.761986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.762014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.762047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ck2l\" (UniqueName: \"kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.762076 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.770611 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.770638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.771314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.773607 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.800565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ck2l\" (UniqueName: \"kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:27 crc kubenswrapper[4792]: E0319 17:24:27.874969 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:24:27 crc kubenswrapper[4792]: I0319 17:24:27.907937 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:24:28 crc kubenswrapper[4792]: I0319 17:24:28.439991 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4"] Mar 19 17:24:29 crc kubenswrapper[4792]: I0319 17:24:29.467624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" event={"ID":"697a022e-dbca-47a6-9034-353e9a5cecde","Type":"ContainerStarted","Data":"41050a83d05d236573e9196e207434c519fdfc92e51d6b4caf40ce7388803416"} Mar 19 17:24:29 crc kubenswrapper[4792]: I0319 17:24:29.468187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" event={"ID":"697a022e-dbca-47a6-9034-353e9a5cecde","Type":"ContainerStarted","Data":"b117e75d1687d4e6a025c55f11d331f54bb838b2c99c892a5af97677f1f709e3"} Mar 19 17:24:29 crc kubenswrapper[4792]: I0319 17:24:29.501055 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" podStartSLOduration=1.839057701 podStartE2EDuration="2.500989495s" podCreationTimestamp="2026-03-19 17:24:27 +0000 UTC" firstStartedPulling="2026-03-19 17:24:28.449955869 +0000 UTC m=+2631.596013409" lastFinishedPulling="2026-03-19 17:24:29.111887653 +0000 UTC m=+2632.257945203" observedRunningTime="2026-03-19 17:24:29.490422436 
+0000 UTC m=+2632.636479976" watchObservedRunningTime="2026-03-19 17:24:29.500989495 +0000 UTC m=+2632.647047035" Mar 19 17:24:37 crc kubenswrapper[4792]: E0319 17:24:37.498230 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472cc5f_822a_4e33_8f16_04cc02cbae89.slice\": RecentStats: unable to find data in memory cache]" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.144659 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565686-2mdj7"] Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.146863 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.149819 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.149830 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.150103 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.155953 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565686-2mdj7"] Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.268770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlrd\" (UniqueName: \"kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd\") pod \"auto-csr-approver-29565686-2mdj7\" (UID: \"96744e0b-5d22-4da8-b394-4f566d114a8b\") " pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:00 crc kubenswrapper[4792]: 
I0319 17:26:00.371059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxlrd\" (UniqueName: \"kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd\") pod \"auto-csr-approver-29565686-2mdj7\" (UID: \"96744e0b-5d22-4da8-b394-4f566d114a8b\") " pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.389564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlrd\" (UniqueName: \"kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd\") pod \"auto-csr-approver-29565686-2mdj7\" (UID: \"96744e0b-5d22-4da8-b394-4f566d114a8b\") " pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.473458 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:00 crc kubenswrapper[4792]: I0319 17:26:00.969394 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565686-2mdj7"] Mar 19 17:26:01 crc kubenswrapper[4792]: I0319 17:26:01.578797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" event={"ID":"96744e0b-5d22-4da8-b394-4f566d114a8b","Type":"ContainerStarted","Data":"8fc8a7d21a7a170d9b410989b2c1866a7227ba5196fea643e0d74155eec8b589"} Mar 19 17:26:02 crc kubenswrapper[4792]: I0319 17:26:02.599484 4792 generic.go:334] "Generic (PLEG): container finished" podID="96744e0b-5d22-4da8-b394-4f566d114a8b" containerID="dd63e9e05afc59120cc8757dea8508d61eef9670a819f9eed7c63a13c41b0cd2" exitCode=0 Mar 19 17:26:02 crc kubenswrapper[4792]: I0319 17:26:02.600394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" 
event={"ID":"96744e0b-5d22-4da8-b394-4f566d114a8b","Type":"ContainerDied","Data":"dd63e9e05afc59120cc8757dea8508d61eef9670a819f9eed7c63a13c41b0cd2"} Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.075060 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.183284 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxlrd\" (UniqueName: \"kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd\") pod \"96744e0b-5d22-4da8-b394-4f566d114a8b\" (UID: \"96744e0b-5d22-4da8-b394-4f566d114a8b\") " Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.191173 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd" (OuterVolumeSpecName: "kube-api-access-cxlrd") pod "96744e0b-5d22-4da8-b394-4f566d114a8b" (UID: "96744e0b-5d22-4da8-b394-4f566d114a8b"). InnerVolumeSpecName "kube-api-access-cxlrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.286601 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxlrd\" (UniqueName: \"kubernetes.io/projected/96744e0b-5d22-4da8-b394-4f566d114a8b-kube-api-access-cxlrd\") on node \"crc\" DevicePath \"\"" Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.623650 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" event={"ID":"96744e0b-5d22-4da8-b394-4f566d114a8b","Type":"ContainerDied","Data":"8fc8a7d21a7a170d9b410989b2c1866a7227ba5196fea643e0d74155eec8b589"} Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.623697 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc8a7d21a7a170d9b410989b2c1866a7227ba5196fea643e0d74155eec8b589" Mar 19 17:26:04 crc kubenswrapper[4792]: I0319 17:26:04.623728 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565686-2mdj7" Mar 19 17:26:05 crc kubenswrapper[4792]: I0319 17:26:05.142449 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565680-xkcn4"] Mar 19 17:26:05 crc kubenswrapper[4792]: I0319 17:26:05.158883 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565680-xkcn4"] Mar 19 17:26:05 crc kubenswrapper[4792]: I0319 17:26:05.754134 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7cffd0-73b4-4d98-9ca5-a9884daad11f" path="/var/lib/kubelet/pods/ec7cffd0-73b4-4d98-9ca5-a9884daad11f/volumes" Mar 19 17:26:07 crc kubenswrapper[4792]: I0319 17:26:07.075511 4792 scope.go:117] "RemoveContainer" containerID="0d2a77f16e50f698c8000bb2b69fbcccffe158169cb64ba9f3fe23c1d2f7535d" Mar 19 17:26:20 crc kubenswrapper[4792]: I0319 17:26:20.230767 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:26:20 crc kubenswrapper[4792]: I0319 17:26:20.231428 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:26:50 crc kubenswrapper[4792]: I0319 17:26:50.231273 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:26:50 crc kubenswrapper[4792]: I0319 17:26:50.231792 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.230816 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.231593 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.231662 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.233006 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.233148 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06" gracePeriod=600 Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.516646 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06" exitCode=0 Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.516714 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06"} Mar 19 17:27:20 crc kubenswrapper[4792]: I0319 17:27:20.517010 4792 scope.go:117] "RemoveContainer" containerID="9242a1b866a0444d19bb931d1d3e2c7c50f9f60c9bc9d7ef5039bd972c5a77c6" Mar 19 17:27:21 crc kubenswrapper[4792]: I0319 17:27:21.531328 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629"} Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.165876 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565688-vdbql"] Mar 19 17:28:00 crc kubenswrapper[4792]: E0319 17:28:00.166951 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96744e0b-5d22-4da8-b394-4f566d114a8b" containerName="oc" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.166964 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96744e0b-5d22-4da8-b394-4f566d114a8b" containerName="oc" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.167210 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96744e0b-5d22-4da8-b394-4f566d114a8b" containerName="oc" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.168139 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.172750 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.173067 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.173074 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.184862 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565688-vdbql"] Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.294925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgn4b\" (UniqueName: \"kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b\") pod \"auto-csr-approver-29565688-vdbql\" (UID: \"2c7b7252-20fe-4736-bbc6-39369d762541\") " pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.397024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgn4b\" (UniqueName: \"kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b\") pod \"auto-csr-approver-29565688-vdbql\" (UID: \"2c7b7252-20fe-4736-bbc6-39369d762541\") " pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.417060 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgn4b\" (UniqueName: \"kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b\") pod \"auto-csr-approver-29565688-vdbql\" (UID: \"2c7b7252-20fe-4736-bbc6-39369d762541\") " 
pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:00 crc kubenswrapper[4792]: I0319 17:28:00.496692 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:01 crc kubenswrapper[4792]: I0319 17:28:01.003564 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565688-vdbql"] Mar 19 17:28:01 crc kubenswrapper[4792]: I0319 17:28:01.011771 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:28:02 crc kubenswrapper[4792]: I0319 17:28:02.009105 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565688-vdbql" event={"ID":"2c7b7252-20fe-4736-bbc6-39369d762541","Type":"ContainerStarted","Data":"9a043867c6f6520147706f855c09a3824bba049038192923d6cb0cfda0818311"} Mar 19 17:28:05 crc kubenswrapper[4792]: I0319 17:28:05.048470 4792 generic.go:334] "Generic (PLEG): container finished" podID="2c7b7252-20fe-4736-bbc6-39369d762541" containerID="27ce0575c7c8b031b47b1c55d05f02e73e537d6f78fe344890050adde0239517" exitCode=0 Mar 19 17:28:05 crc kubenswrapper[4792]: I0319 17:28:05.048560 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565688-vdbql" event={"ID":"2c7b7252-20fe-4736-bbc6-39369d762541","Type":"ContainerDied","Data":"27ce0575c7c8b031b47b1c55d05f02e73e537d6f78fe344890050adde0239517"} Mar 19 17:28:06 crc kubenswrapper[4792]: I0319 17:28:06.593307 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:06 crc kubenswrapper[4792]: I0319 17:28:06.751701 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgn4b\" (UniqueName: \"kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b\") pod \"2c7b7252-20fe-4736-bbc6-39369d762541\" (UID: \"2c7b7252-20fe-4736-bbc6-39369d762541\") " Mar 19 17:28:06 crc kubenswrapper[4792]: I0319 17:28:06.758099 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b" (OuterVolumeSpecName: "kube-api-access-cgn4b") pod "2c7b7252-20fe-4736-bbc6-39369d762541" (UID: "2c7b7252-20fe-4736-bbc6-39369d762541"). InnerVolumeSpecName "kube-api-access-cgn4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:28:06 crc kubenswrapper[4792]: I0319 17:28:06.855097 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgn4b\" (UniqueName: \"kubernetes.io/projected/2c7b7252-20fe-4736-bbc6-39369d762541-kube-api-access-cgn4b\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.088279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565688-vdbql" event={"ID":"2c7b7252-20fe-4736-bbc6-39369d762541","Type":"ContainerDied","Data":"9a043867c6f6520147706f855c09a3824bba049038192923d6cb0cfda0818311"} Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.088336 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a043867c6f6520147706f855c09a3824bba049038192923d6cb0cfda0818311" Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.088435 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565688-vdbql" Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.680713 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565682-ntsm2"] Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.694981 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565682-ntsm2"] Mar 19 17:28:07 crc kubenswrapper[4792]: I0319 17:28:07.752706 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71ac1bc-4e86-4aea-8c12-2925090c5b44" path="/var/lib/kubelet/pods/b71ac1bc-4e86-4aea-8c12-2925090c5b44/volumes" Mar 19 17:28:13 crc kubenswrapper[4792]: I0319 17:28:13.172940 4792 generic.go:334] "Generic (PLEG): container finished" podID="697a022e-dbca-47a6-9034-353e9a5cecde" containerID="41050a83d05d236573e9196e207434c519fdfc92e51d6b4caf40ce7388803416" exitCode=0 Mar 19 17:28:13 crc kubenswrapper[4792]: I0319 17:28:13.173011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" event={"ID":"697a022e-dbca-47a6-9034-353e9a5cecde","Type":"ContainerDied","Data":"41050a83d05d236573e9196e207434c519fdfc92e51d6b4caf40ce7388803416"} Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.744747 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.846698 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle\") pod \"697a022e-dbca-47a6-9034-353e9a5cecde\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.846871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory\") pod \"697a022e-dbca-47a6-9034-353e9a5cecde\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.846981 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0\") pod \"697a022e-dbca-47a6-9034-353e9a5cecde\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.847041 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ck2l\" (UniqueName: \"kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l\") pod \"697a022e-dbca-47a6-9034-353e9a5cecde\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.847088 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam\") pod \"697a022e-dbca-47a6-9034-353e9a5cecde\" (UID: \"697a022e-dbca-47a6-9034-353e9a5cecde\") " Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.853166 4792 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "697a022e-dbca-47a6-9034-353e9a5cecde" (UID: "697a022e-dbca-47a6-9034-353e9a5cecde"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.853373 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l" (OuterVolumeSpecName: "kube-api-access-5ck2l") pod "697a022e-dbca-47a6-9034-353e9a5cecde" (UID: "697a022e-dbca-47a6-9034-353e9a5cecde"). InnerVolumeSpecName "kube-api-access-5ck2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.882620 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "697a022e-dbca-47a6-9034-353e9a5cecde" (UID: "697a022e-dbca-47a6-9034-353e9a5cecde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.892865 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "697a022e-dbca-47a6-9034-353e9a5cecde" (UID: "697a022e-dbca-47a6-9034-353e9a5cecde"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.905127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory" (OuterVolumeSpecName: "inventory") pod "697a022e-dbca-47a6-9034-353e9a5cecde" (UID: "697a022e-dbca-47a6-9034-353e9a5cecde"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.950347 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.950391 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.950405 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.950418 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/697a022e-dbca-47a6-9034-353e9a5cecde-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:14 crc kubenswrapper[4792]: I0319 17:28:14.950431 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ck2l\" (UniqueName: \"kubernetes.io/projected/697a022e-dbca-47a6-9034-353e9a5cecde-kube-api-access-5ck2l\") on node \"crc\" DevicePath \"\"" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.193145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" event={"ID":"697a022e-dbca-47a6-9034-353e9a5cecde","Type":"ContainerDied","Data":"b117e75d1687d4e6a025c55f11d331f54bb838b2c99c892a5af97677f1f709e3"} Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.193433 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b117e75d1687d4e6a025c55f11d331f54bb838b2c99c892a5af97677f1f709e3" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.193267 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.292292 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck"] Mar 19 17:28:15 crc kubenswrapper[4792]: E0319 17:28:15.292773 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697a022e-dbca-47a6-9034-353e9a5cecde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.292793 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="697a022e-dbca-47a6-9034-353e9a5cecde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 19 17:28:15 crc kubenswrapper[4792]: E0319 17:28:15.292831 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7b7252-20fe-4736-bbc6-39369d762541" containerName="oc" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.292852 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7b7252-20fe-4736-bbc6-39369d762541" containerName="oc" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.293088 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7b7252-20fe-4736-bbc6-39369d762541" containerName="oc" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.293121 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="697a022e-dbca-47a6-9034-353e9a5cecde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.293945 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.296490 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.296504 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.296772 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.296918 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.297056 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.297101 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.297108 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.315052 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck"] Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461605 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2944\" (UniqueName: \"kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461771 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461834 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.461906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.462028 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.462096 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.462368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2944\" (UniqueName: \"kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564587 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564645 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564739 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.564831 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.565368 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.569713 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.569737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.570439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.571054 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.571403 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.571641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.571643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.572855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.578404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: 
\"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.583112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2944\" (UniqueName: \"kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xcfck\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:15 crc kubenswrapper[4792]: I0319 17:28:15.615445 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:28:16 crc kubenswrapper[4792]: I0319 17:28:16.173046 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck"] Mar 19 17:28:16 crc kubenswrapper[4792]: I0319 17:28:16.209127 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" event={"ID":"93d49310-d5d8-4e87-9162-296093e9adc5","Type":"ContainerStarted","Data":"956852f75a36c32ead60980f4658e3e070ac49d57673c5007fe7ab47f952cdad"} Mar 19 17:28:17 crc kubenswrapper[4792]: I0319 17:28:17.220764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" event={"ID":"93d49310-d5d8-4e87-9162-296093e9adc5","Type":"ContainerStarted","Data":"ddce76d68160a018cc35893ea1de5271e0912e3bf2d27334f6fafbd4d64e8e2c"} Mar 19 17:28:17 crc kubenswrapper[4792]: I0319 17:28:17.242421 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" podStartSLOduration=1.75193034 podStartE2EDuration="2.242405325s" podCreationTimestamp="2026-03-19 17:28:15 +0000 UTC" firstStartedPulling="2026-03-19 17:28:16.177247763 +0000 UTC m=+2859.323305303" 
lastFinishedPulling="2026-03-19 17:28:16.667722738 +0000 UTC m=+2859.813780288" observedRunningTime="2026-03-19 17:28:17.23633984 +0000 UTC m=+2860.382397380" watchObservedRunningTime="2026-03-19 17:28:17.242405325 +0000 UTC m=+2860.388462865" Mar 19 17:29:07 crc kubenswrapper[4792]: I0319 17:29:07.225169 4792 scope.go:117] "RemoveContainer" containerID="19a22a7c9ff093af88c1f5badba413c1b1a7be745bb05f0f1d7bb9bfbed1a47c" Mar 19 17:29:20 crc kubenswrapper[4792]: I0319 17:29:20.230880 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:29:20 crc kubenswrapper[4792]: I0319 17:29:20.231452 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:29:50 crc kubenswrapper[4792]: I0319 17:29:50.230757 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:29:50 crc kubenswrapper[4792]: I0319 17:29:50.231933 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.507380 4792 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.511024 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.557704 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.630460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.630641 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjnhv\" (UniqueName: \"kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.630739 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.733390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities\") pod \"certified-operators-kz9qb\" 
(UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.733522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjnhv\" (UniqueName: \"kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.733561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.734011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.734126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content\") pod \"certified-operators-kz9qb\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.753834 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjnhv\" (UniqueName: \"kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv\") pod \"certified-operators-kz9qb\" (UID: 
\"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:29:59 crc kubenswrapper[4792]: I0319 17:29:59.831241 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.167611 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565690-vvwvh"] Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.176431 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.181868 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm"] Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.185263 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.185399 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.186388 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.186636 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.188408 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.190054 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.201487 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565690-vvwvh"] Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.217582 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm"] Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.249324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbx74\" (UniqueName: \"kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.249388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.249457 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chdbq\" (UniqueName: \"kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq\") pod \"auto-csr-approver-29565690-vvwvh\" (UID: \"dad7fdd8-b937-4a65-9541-524c61e7daf5\") " pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.249643 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.334589 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.352721 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbx74\" (UniqueName: \"kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.352834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.355041 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-chdbq\" (UniqueName: \"kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq\") pod \"auto-csr-approver-29565690-vvwvh\" (UID: \"dad7fdd8-b937-4a65-9541-524c61e7daf5\") " pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.356333 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.357264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.359716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.372129 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chdbq\" (UniqueName: \"kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq\") pod \"auto-csr-approver-29565690-vvwvh\" (UID: \"dad7fdd8-b937-4a65-9541-524c61e7daf5\") " pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.374074 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbx74\" (UniqueName: \"kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74\") pod \"collect-profiles-29565690-ntstm\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.521315 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:00 crc kubenswrapper[4792]: I0319 17:30:00.534346 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.149427 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565690-vvwvh"] Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.237007 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm"] Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.273055 4792 generic.go:334] "Generic (PLEG): container finished" podID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerID="58fa7d58b37ce3cbaddc30cc73e9a11718bdab4deb512f4689dd6fd112bc0332" exitCode=0 Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.273136 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerDied","Data":"58fa7d58b37ce3cbaddc30cc73e9a11718bdab4deb512f4689dd6fd112bc0332"} Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.273163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" 
event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerStarted","Data":"54b3e5db3433b47fdc3cae997fb4e568a30fe659ad36d6cd44d12a10fcb56dd6"} Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.275285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" event={"ID":"4368e7ae-b63b-42a2-87e0-455db4edac41","Type":"ContainerStarted","Data":"e8a551ef6f139454d88c6da146a9aa0a624ccc08dd232b970403ce4164c1f478"} Mar 19 17:30:01 crc kubenswrapper[4792]: I0319 17:30:01.276917 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" event={"ID":"dad7fdd8-b937-4a65-9541-524c61e7daf5","Type":"ContainerStarted","Data":"67eae7e06f9dfb095f521cd3293548f528d19dcd625f8f1d79c707a43aa720f8"} Mar 19 17:30:02 crc kubenswrapper[4792]: I0319 17:30:02.296779 4792 generic.go:334] "Generic (PLEG): container finished" podID="4368e7ae-b63b-42a2-87e0-455db4edac41" containerID="881bd21c05d24c899b730735d9169ef0acccf40e2b144d985e916b824c71ee89" exitCode=0 Mar 19 17:30:02 crc kubenswrapper[4792]: I0319 17:30:02.296933 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" event={"ID":"4368e7ae-b63b-42a2-87e0-455db4edac41","Type":"ContainerDied","Data":"881bd21c05d24c899b730735d9169ef0acccf40e2b144d985e916b824c71ee89"} Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.309565 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerStarted","Data":"54b1e9b43674906fd174b8d40a2a09df7ec0d40224e0f7fc6319c3c1f172ec87"} Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.708821 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.862920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume\") pod \"4368e7ae-b63b-42a2-87e0-455db4edac41\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.862995 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume\") pod \"4368e7ae-b63b-42a2-87e0-455db4edac41\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.863096 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbx74\" (UniqueName: \"kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74\") pod \"4368e7ae-b63b-42a2-87e0-455db4edac41\" (UID: \"4368e7ae-b63b-42a2-87e0-455db4edac41\") " Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.864827 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume" (OuterVolumeSpecName: "config-volume") pod "4368e7ae-b63b-42a2-87e0-455db4edac41" (UID: "4368e7ae-b63b-42a2-87e0-455db4edac41"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.868756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4368e7ae-b63b-42a2-87e0-455db4edac41" (UID: "4368e7ae-b63b-42a2-87e0-455db4edac41"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.869348 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74" (OuterVolumeSpecName: "kube-api-access-hbx74") pod "4368e7ae-b63b-42a2-87e0-455db4edac41" (UID: "4368e7ae-b63b-42a2-87e0-455db4edac41"). InnerVolumeSpecName "kube-api-access-hbx74". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.966075 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4368e7ae-b63b-42a2-87e0-455db4edac41-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.966136 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4368e7ae-b63b-42a2-87e0-455db4edac41-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:03 crc kubenswrapper[4792]: I0319 17:30:03.966149 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbx74\" (UniqueName: \"kubernetes.io/projected/4368e7ae-b63b-42a2-87e0-455db4edac41-kube-api-access-hbx74\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.322113 4792 generic.go:334] "Generic (PLEG): container finished" podID="dad7fdd8-b937-4a65-9541-524c61e7daf5" containerID="925dcdb03e2e983282e52d6e027a0d4f3e58e7bc77abc2484484191ba1a2e160" exitCode=0 Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.322187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" event={"ID":"dad7fdd8-b937-4a65-9541-524c61e7daf5","Type":"ContainerDied","Data":"925dcdb03e2e983282e52d6e027a0d4f3e58e7bc77abc2484484191ba1a2e160"} Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.325983 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.326035 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm" event={"ID":"4368e7ae-b63b-42a2-87e0-455db4edac41","Type":"ContainerDied","Data":"e8a551ef6f139454d88c6da146a9aa0a624ccc08dd232b970403ce4164c1f478"} Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.326061 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8a551ef6f139454d88c6da146a9aa0a624ccc08dd232b970403ce4164c1f478" Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.791036 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt"] Mar 19 17:30:04 crc kubenswrapper[4792]: I0319 17:30:04.805634 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565645-bzhrt"] Mar 19 17:30:05 crc kubenswrapper[4792]: I0319 17:30:05.344399 4792 generic.go:334] "Generic (PLEG): container finished" podID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerID="54b1e9b43674906fd174b8d40a2a09df7ec0d40224e0f7fc6319c3c1f172ec87" exitCode=0 Mar 19 17:30:05 crc kubenswrapper[4792]: I0319 17:30:05.344721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerDied","Data":"54b1e9b43674906fd174b8d40a2a09df7ec0d40224e0f7fc6319c3c1f172ec87"} Mar 19 17:30:05 crc kubenswrapper[4792]: I0319 17:30:05.760072 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041d9c13-d181-48a0-bab9-efb2d845d365" path="/var/lib/kubelet/pods/041d9c13-d181-48a0-bab9-efb2d845d365/volumes" Mar 19 17:30:05 crc kubenswrapper[4792]: I0319 17:30:05.840804 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.023738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chdbq\" (UniqueName: \"kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq\") pod \"dad7fdd8-b937-4a65-9541-524c61e7daf5\" (UID: \"dad7fdd8-b937-4a65-9541-524c61e7daf5\") " Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.059258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq" (OuterVolumeSpecName: "kube-api-access-chdbq") pod "dad7fdd8-b937-4a65-9541-524c61e7daf5" (UID: "dad7fdd8-b937-4a65-9541-524c61e7daf5"). InnerVolumeSpecName "kube-api-access-chdbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.131316 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chdbq\" (UniqueName: \"kubernetes.io/projected/dad7fdd8-b937-4a65-9541-524c61e7daf5-kube-api-access-chdbq\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.358351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" event={"ID":"dad7fdd8-b937-4a65-9541-524c61e7daf5","Type":"ContainerDied","Data":"67eae7e06f9dfb095f521cd3293548f528d19dcd625f8f1d79c707a43aa720f8"} Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.358759 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67eae7e06f9dfb095f521cd3293548f528d19dcd625f8f1d79c707a43aa720f8" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.358487 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565690-vvwvh" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.363442 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerStarted","Data":"f8d4de5eac4df59ffcae1d192181ee72e0430a5f01b552161a35865934fae398"} Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.385776 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kz9qb" podStartSLOduration=2.674915269 podStartE2EDuration="7.385754049s" podCreationTimestamp="2026-03-19 17:29:59 +0000 UTC" firstStartedPulling="2026-03-19 17:30:01.275285069 +0000 UTC m=+2964.421342609" lastFinishedPulling="2026-03-19 17:30:05.986123849 +0000 UTC m=+2969.132181389" observedRunningTime="2026-03-19 17:30:06.381516583 +0000 UTC m=+2969.527574123" watchObservedRunningTime="2026-03-19 17:30:06.385754049 +0000 UTC m=+2969.531811589" Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.916682 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565684-4bhzd"] Mar 19 17:30:06 crc kubenswrapper[4792]: I0319 17:30:06.925789 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565684-4bhzd"] Mar 19 17:30:07 crc kubenswrapper[4792]: I0319 17:30:07.316729 4792 scope.go:117] "RemoveContainer" containerID="66405a81fd87ca24c460c282ef855c2b208731b75db2be3cf5d0f7c4b953da3f" Mar 19 17:30:07 crc kubenswrapper[4792]: I0319 17:30:07.753124 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7114ba-b847-4b00-b712-58f699f99544" path="/var/lib/kubelet/pods/ad7114ba-b847-4b00-b712-58f699f99544/volumes" Mar 19 17:30:09 crc kubenswrapper[4792]: I0319 17:30:09.832191 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:09 crc kubenswrapper[4792]: I0319 17:30:09.832512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:09 crc kubenswrapper[4792]: I0319 17:30:09.880321 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.736897 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:13 crc kubenswrapper[4792]: E0319 17:30:13.738234 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4368e7ae-b63b-42a2-87e0-455db4edac41" containerName="collect-profiles" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.738255 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4368e7ae-b63b-42a2-87e0-455db4edac41" containerName="collect-profiles" Mar 19 17:30:13 crc kubenswrapper[4792]: E0319 17:30:13.738317 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad7fdd8-b937-4a65-9541-524c61e7daf5" containerName="oc" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.738326 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad7fdd8-b937-4a65-9541-524c61e7daf5" containerName="oc" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.738663 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad7fdd8-b937-4a65-9541-524c61e7daf5" containerName="oc" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.738687 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4368e7ae-b63b-42a2-87e0-455db4edac41" containerName="collect-profiles" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.744161 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.790356 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.922405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.923583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:13 crc kubenswrapper[4792]: I0319 17:30:13.923996 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx5m\" (UniqueName: \"kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.027145 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx5m\" (UniqueName: \"kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.027247 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.027359 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.027878 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.028024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.047879 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx5m\" (UniqueName: \"kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m\") pod \"community-operators-sc868\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.080071 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:14 crc kubenswrapper[4792]: I0319 17:30:14.635103 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:14 crc kubenswrapper[4792]: W0319 17:30:14.640416 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ef9e16_91dc_4c62_9561_dd2a1cd08d86.slice/crio-e6dcf0a74c204c3e744c055c8c5d64e14f1ced5ece7a45172aeaeeba43ac49f9 WatchSource:0}: Error finding container e6dcf0a74c204c3e744c055c8c5d64e14f1ced5ece7a45172aeaeeba43ac49f9: Status 404 returned error can't find the container with id e6dcf0a74c204c3e744c055c8c5d64e14f1ced5ece7a45172aeaeeba43ac49f9 Mar 19 17:30:15 crc kubenswrapper[4792]: I0319 17:30:15.491364 4792 generic.go:334] "Generic (PLEG): container finished" podID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerID="1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07" exitCode=0 Mar 19 17:30:15 crc kubenswrapper[4792]: I0319 17:30:15.491462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerDied","Data":"1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07"} Mar 19 17:30:15 crc kubenswrapper[4792]: I0319 17:30:15.491700 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerStarted","Data":"e6dcf0a74c204c3e744c055c8c5d64e14f1ced5ece7a45172aeaeeba43ac49f9"} Mar 19 17:30:16 crc kubenswrapper[4792]: I0319 17:30:16.504674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" 
event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerStarted","Data":"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc"} Mar 19 17:30:18 crc kubenswrapper[4792]: I0319 17:30:18.530439 4792 generic.go:334] "Generic (PLEG): container finished" podID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerID="2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc" exitCode=0 Mar 19 17:30:18 crc kubenswrapper[4792]: I0319 17:30:18.530530 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerDied","Data":"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc"} Mar 19 17:30:19 crc kubenswrapper[4792]: I0319 17:30:19.544860 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerStarted","Data":"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb"} Mar 19 17:30:19 crc kubenswrapper[4792]: I0319 17:30:19.563741 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sc868" podStartSLOduration=3.104901895 podStartE2EDuration="6.563724223s" podCreationTimestamp="2026-03-19 17:30:13 +0000 UTC" firstStartedPulling="2026-03-19 17:30:15.494588243 +0000 UTC m=+2978.640645783" lastFinishedPulling="2026-03-19 17:30:18.953410561 +0000 UTC m=+2982.099468111" observedRunningTime="2026-03-19 17:30:19.560776872 +0000 UTC m=+2982.706834412" watchObservedRunningTime="2026-03-19 17:30:19.563724223 +0000 UTC m=+2982.709781763" Mar 19 17:30:19 crc kubenswrapper[4792]: I0319 17:30:19.883555 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.232006 4792 patch_prober.go:28] interesting 
pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.232151 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.232263 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.234378 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.234505 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" gracePeriod=600 Mar 19 17:30:20 crc kubenswrapper[4792]: E0319 17:30:20.360082 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.560372 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" exitCode=0 Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.560434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629"} Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.561014 4792 scope.go:117] "RemoveContainer" containerID="c4660cba6f84428e58bbe76b84df53f5ea443faba3710398e6db4dd8c6f3ef06" Mar 19 17:30:20 crc kubenswrapper[4792]: I0319 17:30:20.561932 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:30:20 crc kubenswrapper[4792]: E0319 17:30:20.562472 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.128778 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.129363 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-kz9qb" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="registry-server" containerID="cri-o://f8d4de5eac4df59ffcae1d192181ee72e0430a5f01b552161a35865934fae398" gracePeriod=2 Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.589509 4792 generic.go:334] "Generic (PLEG): container finished" podID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerID="f8d4de5eac4df59ffcae1d192181ee72e0430a5f01b552161a35865934fae398" exitCode=0 Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.589718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerDied","Data":"f8d4de5eac4df59ffcae1d192181ee72e0430a5f01b552161a35865934fae398"} Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.589963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qb" event={"ID":"abcbdaea-028e-40a0-9588-22aa8a06eeda","Type":"ContainerDied","Data":"54b3e5db3433b47fdc3cae997fb4e568a30fe659ad36d6cd44d12a10fcb56dd6"} Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.589979 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b3e5db3433b47fdc3cae997fb4e568a30fe659ad36d6cd44d12a10fcb56dd6" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.646528 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.821503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities\") pod \"abcbdaea-028e-40a0-9588-22aa8a06eeda\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.822051 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content\") pod \"abcbdaea-028e-40a0-9588-22aa8a06eeda\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.822162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjnhv\" (UniqueName: \"kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv\") pod \"abcbdaea-028e-40a0-9588-22aa8a06eeda\" (UID: \"abcbdaea-028e-40a0-9588-22aa8a06eeda\") " Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.822487 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities" (OuterVolumeSpecName: "utilities") pod "abcbdaea-028e-40a0-9588-22aa8a06eeda" (UID: "abcbdaea-028e-40a0-9588-22aa8a06eeda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.823789 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.834677 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv" (OuterVolumeSpecName: "kube-api-access-tjnhv") pod "abcbdaea-028e-40a0-9588-22aa8a06eeda" (UID: "abcbdaea-028e-40a0-9588-22aa8a06eeda"). InnerVolumeSpecName "kube-api-access-tjnhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.879830 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abcbdaea-028e-40a0-9588-22aa8a06eeda" (UID: "abcbdaea-028e-40a0-9588-22aa8a06eeda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.926477 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abcbdaea-028e-40a0-9588-22aa8a06eeda-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:21 crc kubenswrapper[4792]: I0319 17:30:21.926509 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjnhv\" (UniqueName: \"kubernetes.io/projected/abcbdaea-028e-40a0-9588-22aa8a06eeda-kube-api-access-tjnhv\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:22 crc kubenswrapper[4792]: I0319 17:30:22.602034 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qb" Mar 19 17:30:22 crc kubenswrapper[4792]: I0319 17:30:22.639739 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:30:22 crc kubenswrapper[4792]: I0319 17:30:22.652000 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kz9qb"] Mar 19 17:30:23 crc kubenswrapper[4792]: I0319 17:30:23.760923 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" path="/var/lib/kubelet/pods/abcbdaea-028e-40a0-9588-22aa8a06eeda/volumes" Mar 19 17:30:24 crc kubenswrapper[4792]: I0319 17:30:24.081252 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:24 crc kubenswrapper[4792]: I0319 17:30:24.081308 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:24 crc kubenswrapper[4792]: I0319 17:30:24.144759 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:24 crc kubenswrapper[4792]: I0319 17:30:24.690753 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:25 crc kubenswrapper[4792]: I0319 17:30:25.316926 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:26 crc kubenswrapper[4792]: I0319 17:30:26.657478 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sc868" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="registry-server" containerID="cri-o://115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb" gracePeriod=2 Mar 19 17:30:27 crc 
kubenswrapper[4792]: I0319 17:30:27.174322 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.254863 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities\") pod \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.255271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content\") pod \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.255331 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx5m\" (UniqueName: \"kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m\") pod \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\" (UID: \"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86\") " Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.255740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities" (OuterVolumeSpecName: "utilities") pod "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" (UID: "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.256077 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.260882 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m" (OuterVolumeSpecName: "kube-api-access-kvx5m") pod "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" (UID: "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86"). InnerVolumeSpecName "kube-api-access-kvx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.310169 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" (UID: "d6ef9e16-91dc-4c62-9561-dd2a1cd08d86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.357220 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.357267 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx5m\" (UniqueName: \"kubernetes.io/projected/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86-kube-api-access-kvx5m\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.671950 4792 generic.go:334] "Generic (PLEG): container finished" podID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerID="115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb" exitCode=0 Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.672021 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerDied","Data":"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb"} Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.672283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sc868" event={"ID":"d6ef9e16-91dc-4c62-9561-dd2a1cd08d86","Type":"ContainerDied","Data":"e6dcf0a74c204c3e744c055c8c5d64e14f1ced5ece7a45172aeaeeba43ac49f9"} Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.672308 4792 scope.go:117] "RemoveContainer" containerID="115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.672056 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sc868" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.740104 4792 scope.go:117] "RemoveContainer" containerID="2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.766894 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.781201 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sc868"] Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.789080 4792 scope.go:117] "RemoveContainer" containerID="1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.831369 4792 scope.go:117] "RemoveContainer" containerID="115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb" Mar 19 17:30:27 crc kubenswrapper[4792]: E0319 17:30:27.831817 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb\": container with ID starting with 115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb not found: ID does not exist" containerID="115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.831929 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb"} err="failed to get container status \"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb\": rpc error: code = NotFound desc = could not find container \"115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb\": container with ID starting with 115a01f54cd06ed8efe9f99f7dcadc3458b0a543e229a51e1e0f809f0c3cf8eb not 
found: ID does not exist" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.831954 4792 scope.go:117] "RemoveContainer" containerID="2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc" Mar 19 17:30:27 crc kubenswrapper[4792]: E0319 17:30:27.832499 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc\": container with ID starting with 2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc not found: ID does not exist" containerID="2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.832650 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc"} err="failed to get container status \"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc\": rpc error: code = NotFound desc = could not find container \"2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc\": container with ID starting with 2b65b56b417bcadb12015c5302db9faa7b79f5d26e82881f4bf4fcac5d5bbbcc not found: ID does not exist" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.832740 4792 scope.go:117] "RemoveContainer" containerID="1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07" Mar 19 17:30:27 crc kubenswrapper[4792]: E0319 17:30:27.833738 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07\": container with ID starting with 1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07 not found: ID does not exist" containerID="1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07" Mar 19 17:30:27 crc kubenswrapper[4792]: I0319 17:30:27.833829 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07"} err="failed to get container status \"1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07\": rpc error: code = NotFound desc = could not find container \"1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07\": container with ID starting with 1a0f7433895153b7dc47bdb430c0eb03caf4a27e8adb53bcf76b9eb9b0169a07 not found: ID does not exist" Mar 19 17:30:29 crc kubenswrapper[4792]: I0319 17:30:29.753512 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" path="/var/lib/kubelet/pods/d6ef9e16-91dc-4c62-9561-dd2a1cd08d86/volumes" Mar 19 17:30:34 crc kubenswrapper[4792]: I0319 17:30:34.740907 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:30:34 crc kubenswrapper[4792]: E0319 17:30:34.741656 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:30:36 crc kubenswrapper[4792]: I0319 17:30:36.779293 4792 generic.go:334] "Generic (PLEG): container finished" podID="93d49310-d5d8-4e87-9162-296093e9adc5" containerID="ddce76d68160a018cc35893ea1de5271e0912e3bf2d27334f6fafbd4d64e8e2c" exitCode=0 Mar 19 17:30:36 crc kubenswrapper[4792]: I0319 17:30:36.779396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" 
event={"ID":"93d49310-d5d8-4e87-9162-296093e9adc5","Type":"ContainerDied","Data":"ddce76d68160a018cc35893ea1de5271e0912e3bf2d27334f6fafbd4d64e8e2c"} Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.330167 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426698 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426833 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2944\" (UniqueName: \"kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426878 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426898 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426938 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.426965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.427099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.427115 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.427145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory\") pod \"93d49310-d5d8-4e87-9162-296093e9adc5\" (UID: \"93d49310-d5d8-4e87-9162-296093e9adc5\") " Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.449139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944" (OuterVolumeSpecName: "kube-api-access-r2944") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "kube-api-access-r2944". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.465455 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.479335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory" (OuterVolumeSpecName: "inventory") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.487599 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.490297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.498260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.506059 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.509780 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.509828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.514397 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.516096 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "93d49310-d5d8-4e87-9162-296093e9adc5" (UID: "93d49310-d5d8-4e87-9162-296093e9adc5"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529444 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2944\" (UniqueName: \"kubernetes.io/projected/93d49310-d5d8-4e87-9162-296093e9adc5-kube-api-access-r2944\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529483 4792 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529498 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529512 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529522 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529531 4792 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529540 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529548 4792 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529558 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529568 4792 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93d49310-d5d8-4e87-9162-296093e9adc5-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.529576 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/93d49310-d5d8-4e87-9162-296093e9adc5-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.807301 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" event={"ID":"93d49310-d5d8-4e87-9162-296093e9adc5","Type":"ContainerDied","Data":"956852f75a36c32ead60980f4658e3e070ac49d57673c5007fe7ab47f952cdad"} Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.807343 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956852f75a36c32ead60980f4658e3e070ac49d57673c5007fe7ab47f952cdad" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.807347 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xcfck" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.926369 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v"] Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927095 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927113 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927124 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="extract-utilities" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927131 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="extract-utilities" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927156 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d49310-d5d8-4e87-9162-296093e9adc5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927162 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d49310-d5d8-4e87-9162-296093e9adc5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927173 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="extract-content" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927179 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="extract-content" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927192 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927198 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927210 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="extract-utilities" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927216 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="extract-utilities" Mar 19 17:30:38 crc kubenswrapper[4792]: E0319 17:30:38.927231 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="extract-content" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927236 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="extract-content" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927480 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d49310-d5d8-4e87-9162-296093e9adc5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927495 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcbdaea-028e-40a0-9588-22aa8a06eeda" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.927502 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ef9e16-91dc-4c62-9561-dd2a1cd08d86" containerName="registry-server" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.928284 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.931137 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.931538 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.933050 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.933607 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.935373 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:30:38 crc kubenswrapper[4792]: I0319 17:30:38.939605 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v"] Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.056794 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057012 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057278 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkwq\" (UniqueName: \"kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.057441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160045 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkwq\" (UniqueName: \"kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160264 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160584 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.160650 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.168259 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.168384 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.168935 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.169059 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.169166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.179553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.180503 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkwq\" (UniqueName: \"kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.250105 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:30:39 crc kubenswrapper[4792]: I0319 17:30:39.824510 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v"] Mar 19 17:30:40 crc kubenswrapper[4792]: I0319 17:30:40.832939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" event={"ID":"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89","Type":"ContainerStarted","Data":"c86f9792ab4e2457d088eb63e441029843352a0140e8f878dc117255042696ba"} Mar 19 17:30:41 crc kubenswrapper[4792]: I0319 17:30:41.844823 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" event={"ID":"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89","Type":"ContainerStarted","Data":"e4a16de343510c9badb8f23e621e0a3ed92aa15da4bf464e9c1816556caa90c5"} Mar 19 17:30:41 crc kubenswrapper[4792]: I0319 17:30:41.878664 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" podStartSLOduration=3.097967378 podStartE2EDuration="3.87864401s" podCreationTimestamp="2026-03-19 17:30:38 +0000 UTC" firstStartedPulling="2026-03-19 17:30:39.847464827 +0000 UTC m=+3002.993522367" lastFinishedPulling="2026-03-19 17:30:40.628141459 +0000 UTC m=+3003.774198999" observedRunningTime="2026-03-19 17:30:41.863089395 +0000 UTC m=+3005.009146945" watchObservedRunningTime="2026-03-19 17:30:41.87864401 +0000 UTC m=+3005.024701570" Mar 19 17:30:45 crc kubenswrapper[4792]: I0319 17:30:45.741573 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:30:45 crc kubenswrapper[4792]: E0319 17:30:45.742363 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:31:00 crc kubenswrapper[4792]: I0319 17:31:00.740728 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:31:00 crc kubenswrapper[4792]: E0319 17:31:00.741702 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:31:07 crc kubenswrapper[4792]: I0319 17:31:07.402043 4792 scope.go:117] "RemoveContainer" containerID="0cd1c5cabe3711b55d5841a4616b4947c528535190728dc10709624c481de5a9" Mar 19 17:31:13 crc kubenswrapper[4792]: I0319 17:31:13.740392 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:31:13 crc kubenswrapper[4792]: E0319 17:31:13.741330 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:31:26 crc kubenswrapper[4792]: I0319 17:31:26.739817 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:31:26 crc 
kubenswrapper[4792]: E0319 17:31:26.740808 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:31:39 crc kubenswrapper[4792]: I0319 17:31:39.740480 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:31:39 crc kubenswrapper[4792]: E0319 17:31:39.742059 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:31:52 crc kubenswrapper[4792]: I0319 17:31:52.739780 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:31:52 crc kubenswrapper[4792]: E0319 17:31:52.740512 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.153342 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565692-nxv82"] Mar 19 
17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.155972 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.160481 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.160950 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.161121 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.168414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565692-nxv82"] Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.214248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kbt\" (UniqueName: \"kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt\") pod \"auto-csr-approver-29565692-nxv82\" (UID: \"870f81ec-aa3a-4385-84b2-1133132fbfd7\") " pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.316261 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9kbt\" (UniqueName: \"kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt\") pod \"auto-csr-approver-29565692-nxv82\" (UID: \"870f81ec-aa3a-4385-84b2-1133132fbfd7\") " pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.335100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9kbt\" (UniqueName: 
\"kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt\") pod \"auto-csr-approver-29565692-nxv82\" (UID: \"870f81ec-aa3a-4385-84b2-1133132fbfd7\") " pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.476771 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:00 crc kubenswrapper[4792]: I0319 17:32:00.924218 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565692-nxv82"] Mar 19 17:32:01 crc kubenswrapper[4792]: I0319 17:32:01.776685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565692-nxv82" event={"ID":"870f81ec-aa3a-4385-84b2-1133132fbfd7","Type":"ContainerStarted","Data":"d85928ca8e9fb09258338a176fdc908b64915d330ed743b86c9f9540d613b564"} Mar 19 17:32:02 crc kubenswrapper[4792]: I0319 17:32:02.806160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565692-nxv82" event={"ID":"870f81ec-aa3a-4385-84b2-1133132fbfd7","Type":"ContainerStarted","Data":"4b5b6f7c72d90b69f628af78ef46b86925971cbf88c3ec8975289be10e6a6bf4"} Mar 19 17:32:02 crc kubenswrapper[4792]: I0319 17:32:02.828943 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565692-nxv82" podStartSLOduration=1.498857371 podStartE2EDuration="2.828921248s" podCreationTimestamp="2026-03-19 17:32:00 +0000 UTC" firstStartedPulling="2026-03-19 17:32:00.936764297 +0000 UTC m=+3084.082821837" lastFinishedPulling="2026-03-19 17:32:02.266828164 +0000 UTC m=+3085.412885714" observedRunningTime="2026-03-19 17:32:02.818393919 +0000 UTC m=+3085.964451459" watchObservedRunningTime="2026-03-19 17:32:02.828921248 +0000 UTC m=+3085.974978788" Mar 19 17:32:03 crc kubenswrapper[4792]: I0319 17:32:03.740035 4792 scope.go:117] "RemoveContainer" 
containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:32:03 crc kubenswrapper[4792]: E0319 17:32:03.740702 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:03 crc kubenswrapper[4792]: I0319 17:32:03.818395 4792 generic.go:334] "Generic (PLEG): container finished" podID="870f81ec-aa3a-4385-84b2-1133132fbfd7" containerID="4b5b6f7c72d90b69f628af78ef46b86925971cbf88c3ec8975289be10e6a6bf4" exitCode=0 Mar 19 17:32:03 crc kubenswrapper[4792]: I0319 17:32:03.818440 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565692-nxv82" event={"ID":"870f81ec-aa3a-4385-84b2-1133132fbfd7","Type":"ContainerDied","Data":"4b5b6f7c72d90b69f628af78ef46b86925971cbf88c3ec8975289be10e6a6bf4"} Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.208922 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.339817 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9kbt\" (UniqueName: \"kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt\") pod \"870f81ec-aa3a-4385-84b2-1133132fbfd7\" (UID: \"870f81ec-aa3a-4385-84b2-1133132fbfd7\") " Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.349114 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt" (OuterVolumeSpecName: "kube-api-access-d9kbt") pod "870f81ec-aa3a-4385-84b2-1133132fbfd7" (UID: "870f81ec-aa3a-4385-84b2-1133132fbfd7"). InnerVolumeSpecName "kube-api-access-d9kbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.443749 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9kbt\" (UniqueName: \"kubernetes.io/projected/870f81ec-aa3a-4385-84b2-1133132fbfd7-kube-api-access-d9kbt\") on node \"crc\" DevicePath \"\"" Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.852011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565692-nxv82" event={"ID":"870f81ec-aa3a-4385-84b2-1133132fbfd7","Type":"ContainerDied","Data":"d85928ca8e9fb09258338a176fdc908b64915d330ed743b86c9f9540d613b564"} Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.852116 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85928ca8e9fb09258338a176fdc908b64915d330ed743b86c9f9540d613b564" Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.852234 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565692-nxv82" Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.927004 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565686-2mdj7"] Mar 19 17:32:05 crc kubenswrapper[4792]: I0319 17:32:05.938780 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565686-2mdj7"] Mar 19 17:32:07 crc kubenswrapper[4792]: I0319 17:32:07.754773 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96744e0b-5d22-4da8-b394-4f566d114a8b" path="/var/lib/kubelet/pods/96744e0b-5d22-4da8-b394-4f566d114a8b/volumes" Mar 19 17:32:14 crc kubenswrapper[4792]: I0319 17:32:14.740087 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:32:14 crc kubenswrapper[4792]: E0319 17:32:14.742292 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:27 crc kubenswrapper[4792]: I0319 17:32:27.746662 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:32:27 crc kubenswrapper[4792]: E0319 17:32:27.747717 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:39 crc kubenswrapper[4792]: I0319 17:32:39.740348 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:32:39 crc kubenswrapper[4792]: E0319 17:32:39.741217 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:54 crc kubenswrapper[4792]: I0319 17:32:54.740073 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:32:54 crc kubenswrapper[4792]: E0319 17:32:54.740827 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:32:58 crc kubenswrapper[4792]: I0319 17:32:58.457469 4792 generic.go:334] "Generic (PLEG): container finished" podID="1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" containerID="e4a16de343510c9badb8f23e621e0a3ed92aa15da4bf464e9c1816556caa90c5" exitCode=0 Mar 19 17:32:58 crc kubenswrapper[4792]: I0319 17:32:58.457648 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" event={"ID":"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89","Type":"ContainerDied","Data":"e4a16de343510c9badb8f23e621e0a3ed92aa15da4bf464e9c1816556caa90c5"} Mar 19 17:32:59 
crc kubenswrapper[4792]: I0319 17:32:59.938005 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.997526 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.997673 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.997769 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.997957 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.998076 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkwq\" (UniqueName: \"kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq\") pod 
\"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.998159 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:32:59 crc kubenswrapper[4792]: I0319 17:32:59.998179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle\") pod \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\" (UID: \"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89\") " Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.005528 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.007789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq" (OuterVolumeSpecName: "kube-api-access-hlkwq") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "kube-api-access-hlkwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.032513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.032600 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.044883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.047748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.048092 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory" (OuterVolumeSpecName: "inventory") pod "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" (UID: "1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101161 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101376 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101436 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101517 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101577 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101640 4792 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.101702 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkwq\" (UniqueName: \"kubernetes.io/projected/1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89-kube-api-access-hlkwq\") on node \"crc\" DevicePath \"\"" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.484218 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" event={"ID":"1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89","Type":"ContainerDied","Data":"c86f9792ab4e2457d088eb63e441029843352a0140e8f878dc117255042696ba"} Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.484269 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86f9792ab4e2457d088eb63e441029843352a0140e8f878dc117255042696ba" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.484284 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.590795 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm"] Mar 19 17:33:00 crc kubenswrapper[4792]: E0319 17:33:00.591532 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870f81ec-aa3a-4385-84b2-1133132fbfd7" containerName="oc" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.591555 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="870f81ec-aa3a-4385-84b2-1133132fbfd7" containerName="oc" Mar 19 17:33:00 crc kubenswrapper[4792]: E0319 17:33:00.591584 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.591595 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.591903 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.591945 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="870f81ec-aa3a-4385-84b2-1133132fbfd7" containerName="oc" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.593165 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.597524 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.597525 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.597633 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.597736 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.597744 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.610111 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm"] Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.731945 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8njd\" (UniqueName: \"kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.732353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.732527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.732605 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.732757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.733066 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.733173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.835412 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8njd\" (UniqueName: \"kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.836937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.837639 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.837728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.837778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.838556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.838651 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.842739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.843084 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.843161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.843339 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.843494 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.844736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.864062 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8njd\" (UniqueName: \"kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:00 crc kubenswrapper[4792]: I0319 17:33:00.928579 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:33:01 crc kubenswrapper[4792]: I0319 17:33:01.515386 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:33:01 crc kubenswrapper[4792]: I0319 17:33:01.519959 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm"] Mar 19 17:33:02 crc kubenswrapper[4792]: I0319 17:33:02.505592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" event={"ID":"908382e0-1083-4b73-94f3-8be945974902","Type":"ContainerStarted","Data":"b33e61e8e53d233c08269f418198d7b95253b70e583727a94bdcc81842474970"} Mar 19 17:33:02 crc kubenswrapper[4792]: I0319 17:33:02.506377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" event={"ID":"908382e0-1083-4b73-94f3-8be945974902","Type":"ContainerStarted","Data":"31dae90f10a6d222dfab179b6074f5321c121ca883055b23edcc1c75c18bf6b2"} Mar 19 17:33:02 crc kubenswrapper[4792]: I0319 17:33:02.535319 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" podStartSLOduration=1.9170379020000001 podStartE2EDuration="2.535296842s" podCreationTimestamp="2026-03-19 17:33:00 +0000 UTC" firstStartedPulling="2026-03-19 17:33:01.515182511 +0000 UTC m=+3144.661240051" lastFinishedPulling="2026-03-19 17:33:02.133441451 +0000 UTC m=+3145.279498991" observedRunningTime="2026-03-19 17:33:02.524419254 +0000 UTC m=+3145.670476834" watchObservedRunningTime="2026-03-19 17:33:02.535296842 +0000 UTC m=+3145.681354392" Mar 19 17:33:05 crc kubenswrapper[4792]: I0319 17:33:05.740373 4792 scope.go:117] "RemoveContainer" 
containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:33:05 crc kubenswrapper[4792]: E0319 17:33:05.741186 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:33:07 crc kubenswrapper[4792]: I0319 17:33:07.512238 4792 scope.go:117] "RemoveContainer" containerID="dd63e9e05afc59120cc8757dea8508d61eef9670a819f9eed7c63a13c41b0cd2" Mar 19 17:33:17 crc kubenswrapper[4792]: I0319 17:33:17.756355 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:33:17 crc kubenswrapper[4792]: E0319 17:33:17.757687 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:33:31 crc kubenswrapper[4792]: I0319 17:33:31.740525 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:33:31 crc kubenswrapper[4792]: E0319 17:33:31.741754 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:33:42 crc kubenswrapper[4792]: I0319 17:33:42.740809 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:33:42 crc kubenswrapper[4792]: E0319 17:33:42.741818 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:33:53 crc kubenswrapper[4792]: I0319 17:33:53.740387 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:33:53 crc kubenswrapper[4792]: E0319 17:33:53.741257 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.174370 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565694-47zcj"] Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.177317 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.181069 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.182031 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.184477 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.193668 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565694-47zcj"] Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.318211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjxw\" (UniqueName: \"kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw\") pod \"auto-csr-approver-29565694-47zcj\" (UID: \"ea1584de-582a-43d3-82bb-095ff16157a6\") " pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.420333 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjxw\" (UniqueName: \"kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw\") pod \"auto-csr-approver-29565694-47zcj\" (UID: \"ea1584de-582a-43d3-82bb-095ff16157a6\") " pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.439949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjxw\" (UniqueName: \"kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw\") pod \"auto-csr-approver-29565694-47zcj\" (UID: \"ea1584de-582a-43d3-82bb-095ff16157a6\") " 
pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:00 crc kubenswrapper[4792]: I0319 17:34:00.505393 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:02 crc kubenswrapper[4792]: I0319 17:34:02.242989 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565694-47zcj"] Mar 19 17:34:03 crc kubenswrapper[4792]: I0319 17:34:03.241374 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565694-47zcj" event={"ID":"ea1584de-582a-43d3-82bb-095ff16157a6","Type":"ContainerStarted","Data":"c8bc9f909896bdbd6ceb4ffad3d9e8bc41161c42e3faa69033160021ff0f1acf"} Mar 19 17:34:04 crc kubenswrapper[4792]: I0319 17:34:04.256575 4792 generic.go:334] "Generic (PLEG): container finished" podID="ea1584de-582a-43d3-82bb-095ff16157a6" containerID="2e480ee562bb80b7dcd2cdf6d26cece3b778d087a351572d77c74325302cbd2d" exitCode=0 Mar 19 17:34:04 crc kubenswrapper[4792]: I0319 17:34:04.256658 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565694-47zcj" event={"ID":"ea1584de-582a-43d3-82bb-095ff16157a6","Type":"ContainerDied","Data":"2e480ee562bb80b7dcd2cdf6d26cece3b778d087a351572d77c74325302cbd2d"} Mar 19 17:34:05 crc kubenswrapper[4792]: I0319 17:34:05.738380 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:05 crc kubenswrapper[4792]: I0319 17:34:05.865153 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkjxw\" (UniqueName: \"kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw\") pod \"ea1584de-582a-43d3-82bb-095ff16157a6\" (UID: \"ea1584de-582a-43d3-82bb-095ff16157a6\") " Mar 19 17:34:05 crc kubenswrapper[4792]: I0319 17:34:05.871481 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw" (OuterVolumeSpecName: "kube-api-access-qkjxw") pod "ea1584de-582a-43d3-82bb-095ff16157a6" (UID: "ea1584de-582a-43d3-82bb-095ff16157a6"). InnerVolumeSpecName "kube-api-access-qkjxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:34:05 crc kubenswrapper[4792]: I0319 17:34:05.969166 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkjxw\" (UniqueName: \"kubernetes.io/projected/ea1584de-582a-43d3-82bb-095ff16157a6-kube-api-access-qkjxw\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.281492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565694-47zcj" event={"ID":"ea1584de-582a-43d3-82bb-095ff16157a6","Type":"ContainerDied","Data":"c8bc9f909896bdbd6ceb4ffad3d9e8bc41161c42e3faa69033160021ff0f1acf"} Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.281554 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8bc9f909896bdbd6ceb4ffad3d9e8bc41161c42e3faa69033160021ff0f1acf" Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.281558 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565694-47zcj" Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.740529 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:34:06 crc kubenswrapper[4792]: E0319 17:34:06.741362 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.822415 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565688-vdbql"] Mar 19 17:34:06 crc kubenswrapper[4792]: I0319 17:34:06.837420 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565688-vdbql"] Mar 19 17:34:07 crc kubenswrapper[4792]: I0319 17:34:07.772228 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7b7252-20fe-4736-bbc6-39369d762541" path="/var/lib/kubelet/pods/2c7b7252-20fe-4736-bbc6-39369d762541/volumes" Mar 19 17:34:17 crc kubenswrapper[4792]: I0319 17:34:17.747161 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:34:17 crc kubenswrapper[4792]: E0319 17:34:17.748175 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:34:31 crc kubenswrapper[4792]: I0319 17:34:31.742083 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:34:31 crc kubenswrapper[4792]: E0319 17:34:31.742970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:34:43 crc kubenswrapper[4792]: I0319 17:34:43.739668 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:34:43 crc kubenswrapper[4792]: E0319 17:34:43.740556 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:34:52 crc kubenswrapper[4792]: I0319 17:34:52.820524 4792 generic.go:334] "Generic (PLEG): container finished" podID="908382e0-1083-4b73-94f3-8be945974902" containerID="b33e61e8e53d233c08269f418198d7b95253b70e583727a94bdcc81842474970" exitCode=0 Mar 19 17:34:52 crc kubenswrapper[4792]: I0319 17:34:52.820704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" event={"ID":"908382e0-1083-4b73-94f3-8be945974902","Type":"ContainerDied","Data":"b33e61e8e53d233c08269f418198d7b95253b70e583727a94bdcc81842474970"} Mar 19 
17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.376808 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.493285 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8njd\" (UniqueName: \"kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.493653 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.493741 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.493854 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.493981 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.494022 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.494079 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory\") pod \"908382e0-1083-4b73-94f3-8be945974902\" (UID: \"908382e0-1083-4b73-94f3-8be945974902\") " Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.502954 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.503637 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd" (OuterVolumeSpecName: "kube-api-access-z8njd") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "kube-api-access-z8njd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.532970 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.533607 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.537902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.544904 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.559521 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory" (OuterVolumeSpecName: "inventory") pod "908382e0-1083-4b73-94f3-8be945974902" (UID: "908382e0-1083-4b73-94f3-8be945974902"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.596976 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8njd\" (UniqueName: \"kubernetes.io/projected/908382e0-1083-4b73-94f3-8be945974902-kube-api-access-z8njd\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.597014 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.597031 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.597045 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.597062 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc 
kubenswrapper[4792]: I0319 17:34:54.597076 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.597090 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908382e0-1083-4b73-94f3-8be945974902-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.849566 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" event={"ID":"908382e0-1083-4b73-94f3-8be945974902","Type":"ContainerDied","Data":"31dae90f10a6d222dfab179b6074f5321c121ca883055b23edcc1c75c18bf6b2"} Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.849621 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm" Mar 19 17:34:54 crc kubenswrapper[4792]: I0319 17:34:54.849624 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31dae90f10a6d222dfab179b6074f5321c121ca883055b23edcc1c75c18bf6b2" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.064786 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9"] Mar 19 17:34:55 crc kubenswrapper[4792]: E0319 17:34:55.065297 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908382e0-1083-4b73-94f3-8be945974902" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.065320 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="908382e0-1083-4b73-94f3-8be945974902" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 19 
17:34:55 crc kubenswrapper[4792]: E0319 17:34:55.065332 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1584de-582a-43d3-82bb-095ff16157a6" containerName="oc" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.065340 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1584de-582a-43d3-82bb-095ff16157a6" containerName="oc" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.065586 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1584de-582a-43d3-82bb-095ff16157a6" containerName="oc" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.065603 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="908382e0-1083-4b73-94f3-8be945974902" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.066410 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.074699 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.074947 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.075109 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.075634 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-968jx" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.076505 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.080723 4792 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9"] Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.211388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.211658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.211757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.211978 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: 
I0319 17:34:55.212001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4746d\" (UniqueName: \"kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.315355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.315514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.315565 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.315679 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.315706 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4746d\" (UniqueName: \"kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.319938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.320051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.320094 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.323381 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.339412 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4746d\" (UniqueName: \"kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8tbh9\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.387159 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:34:55 crc kubenswrapper[4792]: I0319 17:34:55.977626 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9"] Mar 19 17:34:56 crc kubenswrapper[4792]: I0319 17:34:56.882585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" event={"ID":"2f6f7544-7d00-409e-baf3-688917113063","Type":"ContainerStarted","Data":"c2eed779040522785ab2152a15b91a370c49c86dae86b00a147f6e4232e2ffc8"} Mar 19 17:34:56 crc kubenswrapper[4792]: I0319 17:34:56.883145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" event={"ID":"2f6f7544-7d00-409e-baf3-688917113063","Type":"ContainerStarted","Data":"9a91c3fd915c5bf231595fb2626b1634bdf2cc6491bfe438b8a87012183c06df"} Mar 19 17:34:57 crc kubenswrapper[4792]: I0319 17:34:57.748040 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:34:57 crc kubenswrapper[4792]: E0319 17:34:57.748606 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:35:07 crc kubenswrapper[4792]: I0319 17:35:07.631269 4792 scope.go:117] "RemoveContainer" containerID="27ce0575c7c8b031b47b1c55d05f02e73e537d6f78fe344890050adde0239517" Mar 19 17:35:10 crc kubenswrapper[4792]: I0319 17:35:10.740533 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:35:10 crc 
kubenswrapper[4792]: E0319 17:35:10.741443 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:35:11 crc kubenswrapper[4792]: I0319 17:35:11.074686 4792 generic.go:334] "Generic (PLEG): container finished" podID="2f6f7544-7d00-409e-baf3-688917113063" containerID="c2eed779040522785ab2152a15b91a370c49c86dae86b00a147f6e4232e2ffc8" exitCode=0 Mar 19 17:35:11 crc kubenswrapper[4792]: I0319 17:35:11.074740 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" event={"ID":"2f6f7544-7d00-409e-baf3-688917113063","Type":"ContainerDied","Data":"c2eed779040522785ab2152a15b91a370c49c86dae86b00a147f6e4232e2ffc8"} Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.620694 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.672511 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0\") pod \"2f6f7544-7d00-409e-baf3-688917113063\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.672574 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4746d\" (UniqueName: \"kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d\") pod \"2f6f7544-7d00-409e-baf3-688917113063\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.672684 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1\") pod \"2f6f7544-7d00-409e-baf3-688917113063\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.672817 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory\") pod \"2f6f7544-7d00-409e-baf3-688917113063\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.673096 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam\") pod \"2f6f7544-7d00-409e-baf3-688917113063\" (UID: \"2f6f7544-7d00-409e-baf3-688917113063\") " Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 
17:35:12.680569 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d" (OuterVolumeSpecName: "kube-api-access-4746d") pod "2f6f7544-7d00-409e-baf3-688917113063" (UID: "2f6f7544-7d00-409e-baf3-688917113063"). InnerVolumeSpecName "kube-api-access-4746d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.708923 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory" (OuterVolumeSpecName: "inventory") pod "2f6f7544-7d00-409e-baf3-688917113063" (UID: "2f6f7544-7d00-409e-baf3-688917113063"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.717376 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "2f6f7544-7d00-409e-baf3-688917113063" (UID: "2f6f7544-7d00-409e-baf3-688917113063"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.721610 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "2f6f7544-7d00-409e-baf3-688917113063" (UID: "2f6f7544-7d00-409e-baf3-688917113063"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.722053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f6f7544-7d00-409e-baf3-688917113063" (UID: "2f6f7544-7d00-409e-baf3-688917113063"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.776526 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.776558 4792 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.776568 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4746d\" (UniqueName: \"kubernetes.io/projected/2f6f7544-7d00-409e-baf3-688917113063-kube-api-access-4746d\") on node \"crc\" DevicePath \"\"" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.776577 4792 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 19 17:35:12 crc kubenswrapper[4792]: I0319 17:35:12.776587 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6f7544-7d00-409e-baf3-688917113063-inventory\") on node \"crc\" DevicePath \"\"" Mar 19 17:35:13 crc kubenswrapper[4792]: I0319 
17:35:13.095644 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" event={"ID":"2f6f7544-7d00-409e-baf3-688917113063","Type":"ContainerDied","Data":"9a91c3fd915c5bf231595fb2626b1634bdf2cc6491bfe438b8a87012183c06df"} Mar 19 17:35:13 crc kubenswrapper[4792]: I0319 17:35:13.095921 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a91c3fd915c5bf231595fb2626b1634bdf2cc6491bfe438b8a87012183c06df" Mar 19 17:35:13 crc kubenswrapper[4792]: I0319 17:35:13.095678 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8tbh9" Mar 19 17:35:22 crc kubenswrapper[4792]: I0319 17:35:22.742128 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:35:23 crc kubenswrapper[4792]: I0319 17:35:23.254527 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297"} Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.210794 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565696-td7dl"] Mar 19 17:36:00 crc kubenswrapper[4792]: E0319 17:36:00.212004 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6f7544-7d00-409e-baf3-688917113063" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.212104 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6f7544-7d00-409e-baf3-688917113063" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.212419 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2f6f7544-7d00-409e-baf3-688917113063" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.213387 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.215819 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.216045 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.216254 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.222294 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565696-td7dl"] Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.297316 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfj7\" (UniqueName: \"kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7\") pod \"auto-csr-approver-29565696-td7dl\" (UID: \"3a6d22ef-8199-407e-8a67-db40065a5533\") " pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.399705 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnfj7\" (UniqueName: \"kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7\") pod \"auto-csr-approver-29565696-td7dl\" (UID: \"3a6d22ef-8199-407e-8a67-db40065a5533\") " pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.426103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rnfj7\" (UniqueName: \"kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7\") pod \"auto-csr-approver-29565696-td7dl\" (UID: \"3a6d22ef-8199-407e-8a67-db40065a5533\") " pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:00 crc kubenswrapper[4792]: I0319 17:36:00.535397 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:01 crc kubenswrapper[4792]: I0319 17:36:01.042930 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565696-td7dl"] Mar 19 17:36:01 crc kubenswrapper[4792]: I0319 17:36:01.754819 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565696-td7dl" event={"ID":"3a6d22ef-8199-407e-8a67-db40065a5533","Type":"ContainerStarted","Data":"7f32a9de0c4fb3399815985da226412b320265569dcdced42f8596cc5c280b5a"} Mar 19 17:36:02 crc kubenswrapper[4792]: I0319 17:36:02.757045 4792 generic.go:334] "Generic (PLEG): container finished" podID="3a6d22ef-8199-407e-8a67-db40065a5533" containerID="0cd4560b02ede1cb3ecb8432cc9fa936e91fc147c07f87ef344562e5743fc8fd" exitCode=0 Mar 19 17:36:02 crc kubenswrapper[4792]: I0319 17:36:02.757108 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565696-td7dl" event={"ID":"3a6d22ef-8199-407e-8a67-db40065a5533","Type":"ContainerDied","Data":"0cd4560b02ede1cb3ecb8432cc9fa936e91fc147c07f87ef344562e5743fc8fd"} Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.386711 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.391885 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.437341 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.474833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.474928 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vv88\" (UniqueName: \"kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.475000 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.576862 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.577246 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2vv88\" (UniqueName: \"kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.577279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.577383 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.578042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.597353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vv88\" (UniqueName: \"kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88\") pod \"redhat-operators-dp5ll\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:03 crc kubenswrapper[4792]: I0319 17:36:03.728267 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.251754 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.398319 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.408187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnfj7\" (UniqueName: \"kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7\") pod \"3a6d22ef-8199-407e-8a67-db40065a5533\" (UID: \"3a6d22ef-8199-407e-8a67-db40065a5533\") " Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.413992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7" (OuterVolumeSpecName: "kube-api-access-rnfj7") pod "3a6d22ef-8199-407e-8a67-db40065a5533" (UID: "3a6d22ef-8199-407e-8a67-db40065a5533"). InnerVolumeSpecName "kube-api-access-rnfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.511912 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnfj7\" (UniqueName: \"kubernetes.io/projected/3a6d22ef-8199-407e-8a67-db40065a5533-kube-api-access-rnfj7\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.784536 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565696-td7dl" event={"ID":"3a6d22ef-8199-407e-8a67-db40065a5533","Type":"ContainerDied","Data":"7f32a9de0c4fb3399815985da226412b320265569dcdced42f8596cc5c280b5a"} Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.784813 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f32a9de0c4fb3399815985da226412b320265569dcdced42f8596cc5c280b5a" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.784567 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565696-td7dl" Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.787396 4792 generic.go:334] "Generic (PLEG): container finished" podID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerID="0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e" exitCode=0 Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.787436 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerDied","Data":"0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e"} Mar 19 17:36:04 crc kubenswrapper[4792]: I0319 17:36:04.787465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerStarted","Data":"3cd0d992e180fbb5127b770fabb716b0151dd49d9e6c5b1977f55552c3c7aa95"} Mar 19 17:36:05 crc 
kubenswrapper[4792]: I0319 17:36:05.326269 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565690-vvwvh"] Mar 19 17:36:05 crc kubenswrapper[4792]: I0319 17:36:05.346737 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565690-vvwvh"] Mar 19 17:36:05 crc kubenswrapper[4792]: I0319 17:36:05.757121 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad7fdd8-b937-4a65-9541-524c61e7daf5" path="/var/lib/kubelet/pods/dad7fdd8-b937-4a65-9541-524c61e7daf5/volumes" Mar 19 17:36:06 crc kubenswrapper[4792]: I0319 17:36:06.814554 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerStarted","Data":"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92"} Mar 19 17:36:07 crc kubenswrapper[4792]: I0319 17:36:07.716091 4792 scope.go:117] "RemoveContainer" containerID="f8d4de5eac4df59ffcae1d192181ee72e0430a5f01b552161a35865934fae398" Mar 19 17:36:07 crc kubenswrapper[4792]: I0319 17:36:07.762873 4792 scope.go:117] "RemoveContainer" containerID="925dcdb03e2e983282e52d6e027a0d4f3e58e7bc77abc2484484191ba1a2e160" Mar 19 17:36:07 crc kubenswrapper[4792]: I0319 17:36:07.829707 4792 scope.go:117] "RemoveContainer" containerID="58fa7d58b37ce3cbaddc30cc73e9a11718bdab4deb512f4689dd6fd112bc0332" Mar 19 17:36:07 crc kubenswrapper[4792]: I0319 17:36:07.884527 4792 scope.go:117] "RemoveContainer" containerID="54b1e9b43674906fd174b8d40a2a09df7ec0d40224e0f7fc6319c3c1f172ec87" Mar 19 17:36:10 crc kubenswrapper[4792]: I0319 17:36:10.879942 4792 generic.go:334] "Generic (PLEG): container finished" podID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerID="e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92" exitCode=0 Mar 19 17:36:10 crc kubenswrapper[4792]: I0319 17:36:10.879987 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerDied","Data":"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92"} Mar 19 17:36:11 crc kubenswrapper[4792]: I0319 17:36:11.892106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerStarted","Data":"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60"} Mar 19 17:36:11 crc kubenswrapper[4792]: I0319 17:36:11.917377 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dp5ll" podStartSLOduration=2.392930567 podStartE2EDuration="8.917361309s" podCreationTimestamp="2026-03-19 17:36:03 +0000 UTC" firstStartedPulling="2026-03-19 17:36:04.789215169 +0000 UTC m=+3327.935272709" lastFinishedPulling="2026-03-19 17:36:11.313645911 +0000 UTC m=+3334.459703451" observedRunningTime="2026-03-19 17:36:11.914753957 +0000 UTC m=+3335.060811497" watchObservedRunningTime="2026-03-19 17:36:11.917361309 +0000 UTC m=+3335.063418839" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.009503 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:12 crc kubenswrapper[4792]: E0319 17:36:12.010391 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6d22ef-8199-407e-8a67-db40065a5533" containerName="oc" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.010444 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6d22ef-8199-407e-8a67-db40065a5533" containerName="oc" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.011155 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6d22ef-8199-407e-8a67-db40065a5533" containerName="oc" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.013410 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.023457 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.111603 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.111670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkp6\" (UniqueName: \"kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.111761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.214076 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkp6\" (UniqueName: \"kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.214173 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.214326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.214760 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.215295 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.241800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkp6\" (UniqueName: \"kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6\") pod \"redhat-marketplace-v4ghv\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.335870 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:12 crc kubenswrapper[4792]: I0319 17:36:12.894688 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:13 crc kubenswrapper[4792]: I0319 17:36:13.729080 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:13 crc kubenswrapper[4792]: I0319 17:36:13.729799 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:13 crc kubenswrapper[4792]: I0319 17:36:13.916190 4792 generic.go:334] "Generic (PLEG): container finished" podID="016549af-d2e3-4188-b77e-464dc0160bfb" containerID="16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94" exitCode=0 Mar 19 17:36:13 crc kubenswrapper[4792]: I0319 17:36:13.916293 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerDied","Data":"16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94"} Mar 19 17:36:13 crc kubenswrapper[4792]: I0319 17:36:13.917383 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerStarted","Data":"2cd5e5a0d77565d3295bad4556b310bd923d916d281a3ecba286bfcb92062772"} Mar 19 17:36:14 crc kubenswrapper[4792]: I0319 17:36:14.782191 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dp5ll" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" probeResult="failure" output=< Mar 19 17:36:14 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:36:14 crc kubenswrapper[4792]: > Mar 19 17:36:14 crc kubenswrapper[4792]: I0319 
17:36:14.929265 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerStarted","Data":"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6"} Mar 19 17:36:15 crc kubenswrapper[4792]: I0319 17:36:15.939607 4792 generic.go:334] "Generic (PLEG): container finished" podID="016549af-d2e3-4188-b77e-464dc0160bfb" containerID="0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6" exitCode=0 Mar 19 17:36:15 crc kubenswrapper[4792]: I0319 17:36:15.939711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerDied","Data":"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6"} Mar 19 17:36:16 crc kubenswrapper[4792]: I0319 17:36:16.953419 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerStarted","Data":"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492"} Mar 19 17:36:16 crc kubenswrapper[4792]: I0319 17:36:16.976343 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v4ghv" podStartSLOduration=3.534562407 podStartE2EDuration="5.976322013s" podCreationTimestamp="2026-03-19 17:36:11 +0000 UTC" firstStartedPulling="2026-03-19 17:36:13.918851241 +0000 UTC m=+3337.064908781" lastFinishedPulling="2026-03-19 17:36:16.360610837 +0000 UTC m=+3339.506668387" observedRunningTime="2026-03-19 17:36:16.973780134 +0000 UTC m=+3340.119837684" watchObservedRunningTime="2026-03-19 17:36:16.976322013 +0000 UTC m=+3340.122379563" Mar 19 17:36:22 crc kubenswrapper[4792]: I0319 17:36:22.337158 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 
19 17:36:22 crc kubenswrapper[4792]: I0319 17:36:22.337767 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:22 crc kubenswrapper[4792]: I0319 17:36:22.391650 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:23 crc kubenswrapper[4792]: I0319 17:36:23.070831 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:23 crc kubenswrapper[4792]: I0319 17:36:23.129785 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:24 crc kubenswrapper[4792]: I0319 17:36:24.786717 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dp5ll" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" probeResult="failure" output=< Mar 19 17:36:24 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:36:24 crc kubenswrapper[4792]: > Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.038175 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v4ghv" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="registry-server" containerID="cri-o://0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492" gracePeriod=2 Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.552785 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.649876 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbkp6\" (UniqueName: \"kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6\") pod \"016549af-d2e3-4188-b77e-464dc0160bfb\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.650069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities\") pod \"016549af-d2e3-4188-b77e-464dc0160bfb\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.650284 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content\") pod \"016549af-d2e3-4188-b77e-464dc0160bfb\" (UID: \"016549af-d2e3-4188-b77e-464dc0160bfb\") " Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.654235 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities" (OuterVolumeSpecName: "utilities") pod "016549af-d2e3-4188-b77e-464dc0160bfb" (UID: "016549af-d2e3-4188-b77e-464dc0160bfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.655684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6" (OuterVolumeSpecName: "kube-api-access-zbkp6") pod "016549af-d2e3-4188-b77e-464dc0160bfb" (UID: "016549af-d2e3-4188-b77e-464dc0160bfb"). InnerVolumeSpecName "kube-api-access-zbkp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.685407 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "016549af-d2e3-4188-b77e-464dc0160bfb" (UID: "016549af-d2e3-4188-b77e-464dc0160bfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.758189 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbkp6\" (UniqueName: \"kubernetes.io/projected/016549af-d2e3-4188-b77e-464dc0160bfb-kube-api-access-zbkp6\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.758311 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:25 crc kubenswrapper[4792]: I0319 17:36:25.758366 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016549af-d2e3-4188-b77e-464dc0160bfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.050682 4792 generic.go:334] "Generic (PLEG): container finished" podID="016549af-d2e3-4188-b77e-464dc0160bfb" containerID="0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492" exitCode=0 Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.050765 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v4ghv" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.050793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerDied","Data":"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492"} Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.051866 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v4ghv" event={"ID":"016549af-d2e3-4188-b77e-464dc0160bfb","Type":"ContainerDied","Data":"2cd5e5a0d77565d3295bad4556b310bd923d916d281a3ecba286bfcb92062772"} Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.051894 4792 scope.go:117] "RemoveContainer" containerID="0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.103603 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.104819 4792 scope.go:117] "RemoveContainer" containerID="0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.120334 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v4ghv"] Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.132074 4792 scope.go:117] "RemoveContainer" containerID="16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.180384 4792 scope.go:117] "RemoveContainer" containerID="0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492" Mar 19 17:36:26 crc kubenswrapper[4792]: E0319 17:36:26.181077 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492\": container with ID starting with 0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492 not found: ID does not exist" containerID="0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.181119 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492"} err="failed to get container status \"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492\": rpc error: code = NotFound desc = could not find container \"0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492\": container with ID starting with 0262abd4eaca2e10c470c162fd18a944ad61bf4a94fbd13e4f26034d03d13492 not found: ID does not exist" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.181166 4792 scope.go:117] "RemoveContainer" containerID="0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6" Mar 19 17:36:26 crc kubenswrapper[4792]: E0319 17:36:26.181605 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6\": container with ID starting with 0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6 not found: ID does not exist" containerID="0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.181639 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6"} err="failed to get container status \"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6\": rpc error: code = NotFound desc = could not find container \"0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6\": container with ID 
starting with 0711b92fe9ea1a317ab16fd24d527dedb0d0f7d5f84d61dcd7c0a6c28fc10fd6 not found: ID does not exist" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.181659 4792 scope.go:117] "RemoveContainer" containerID="16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94" Mar 19 17:36:26 crc kubenswrapper[4792]: E0319 17:36:26.181946 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94\": container with ID starting with 16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94 not found: ID does not exist" containerID="16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94" Mar 19 17:36:26 crc kubenswrapper[4792]: I0319 17:36:26.181976 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94"} err="failed to get container status \"16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94\": rpc error: code = NotFound desc = could not find container \"16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94\": container with ID starting with 16b3ca9a558e45333ad58ac046072d0db284c03d07320bb13f91d5baad403d94 not found: ID does not exist" Mar 19 17:36:27 crc kubenswrapper[4792]: I0319 17:36:27.771518 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" path="/var/lib/kubelet/pods/016549af-d2e3-4188-b77e-464dc0160bfb/volumes" Mar 19 17:36:34 crc kubenswrapper[4792]: I0319 17:36:34.786270 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dp5ll" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" probeResult="failure" output=< Mar 19 17:36:34 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:36:34 crc 
kubenswrapper[4792]: > Mar 19 17:36:43 crc kubenswrapper[4792]: I0319 17:36:43.785518 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:43 crc kubenswrapper[4792]: I0319 17:36:43.836522 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:44 crc kubenswrapper[4792]: I0319 17:36:44.023795 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.245886 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dp5ll" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" containerID="cri-o://5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60" gracePeriod=2 Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.751407 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.835634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities\") pod \"b2c0f30f-5687-41f0-8038-0b18801d9966\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.835944 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content\") pod \"b2c0f30f-5687-41f0-8038-0b18801d9966\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.836038 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vv88\" (UniqueName: \"kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88\") pod \"b2c0f30f-5687-41f0-8038-0b18801d9966\" (UID: \"b2c0f30f-5687-41f0-8038-0b18801d9966\") " Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.841442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities" (OuterVolumeSpecName: "utilities") pod "b2c0f30f-5687-41f0-8038-0b18801d9966" (UID: "b2c0f30f-5687-41f0-8038-0b18801d9966"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.843427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88" (OuterVolumeSpecName: "kube-api-access-2vv88") pod "b2c0f30f-5687-41f0-8038-0b18801d9966" (UID: "b2c0f30f-5687-41f0-8038-0b18801d9966"). InnerVolumeSpecName "kube-api-access-2vv88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.939349 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.939395 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vv88\" (UniqueName: \"kubernetes.io/projected/b2c0f30f-5687-41f0-8038-0b18801d9966-kube-api-access-2vv88\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:45 crc kubenswrapper[4792]: I0319 17:36:45.977199 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2c0f30f-5687-41f0-8038-0b18801d9966" (UID: "b2c0f30f-5687-41f0-8038-0b18801d9966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.043307 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c0f30f-5687-41f0-8038-0b18801d9966-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.256620 4792 generic.go:334] "Generic (PLEG): container finished" podID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerID="5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60" exitCode=0 Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.256664 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerDied","Data":"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60"} Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.256683 4792 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dp5ll" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.256704 4792 scope.go:117] "RemoveContainer" containerID="5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.256692 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp5ll" event={"ID":"b2c0f30f-5687-41f0-8038-0b18801d9966","Type":"ContainerDied","Data":"3cd0d992e180fbb5127b770fabb716b0151dd49d9e6c5b1977f55552c3c7aa95"} Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.298395 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.298898 4792 scope.go:117] "RemoveContainer" containerID="e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.313501 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dp5ll"] Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.335106 4792 scope.go:117] "RemoveContainer" containerID="0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.381334 4792 scope.go:117] "RemoveContainer" containerID="5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60" Mar 19 17:36:46 crc kubenswrapper[4792]: E0319 17:36:46.381824 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60\": container with ID starting with 5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60 not found: ID does not exist" containerID="5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.381958 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60"} err="failed to get container status \"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60\": rpc error: code = NotFound desc = could not find container \"5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60\": container with ID starting with 5016da01fcb3eac172e8b87ddb4ad3443d78dba110956149cc31b9b876138a60 not found: ID does not exist" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.381984 4792 scope.go:117] "RemoveContainer" containerID="e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92" Mar 19 17:36:46 crc kubenswrapper[4792]: E0319 17:36:46.382486 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92\": container with ID starting with e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92 not found: ID does not exist" containerID="e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.382540 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92"} err="failed to get container status \"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92\": rpc error: code = NotFound desc = could not find container \"e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92\": container with ID starting with e70183fc6137d04cd03e2454059fe245b252cc5e8c84a324ef51fd8aa96aea92 not found: ID does not exist" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.382575 4792 scope.go:117] "RemoveContainer" containerID="0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e" Mar 19 17:36:46 crc kubenswrapper[4792]: E0319 
17:36:46.383047 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e\": container with ID starting with 0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e not found: ID does not exist" containerID="0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e" Mar 19 17:36:46 crc kubenswrapper[4792]: I0319 17:36:46.383109 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e"} err="failed to get container status \"0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e\": rpc error: code = NotFound desc = could not find container \"0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e\": container with ID starting with 0e84f37869e387ed47f7fc2978c46bfde2e58744b94e6f452ce377bfc6ab848e not found: ID does not exist" Mar 19 17:36:47 crc kubenswrapper[4792]: I0319 17:36:47.751311 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" path="/var/lib/kubelet/pods/b2c0f30f-5687-41f0-8038-0b18801d9966/volumes" Mar 19 17:37:28 crc kubenswrapper[4792]: E0319 17:37:28.144014 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Mar 19 17:37:50 crc kubenswrapper[4792]: I0319 17:37:50.231454 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:37:50 crc kubenswrapper[4792]: I0319 17:37:50.232567 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.158859 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565698-npfxq"] Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160249 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="extract-content" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160276 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="extract-content" Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160290 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="extract-utilities" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160303 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="extract-utilities" Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160325 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="extract-utilities" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160338 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="extract-utilities" Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160374 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160390 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160441 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160453 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: E0319 17:38:00.160488 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="extract-content" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.160500 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="extract-content" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.161074 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c0f30f-5687-41f0-8038-0b18801d9966" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.161107 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="016549af-d2e3-4188-b77e-464dc0160bfb" containerName="registry-server" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.162526 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.167103 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.167633 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.167867 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.172201 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565698-npfxq"] Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.261574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh52\" (UniqueName: \"kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52\") pod \"auto-csr-approver-29565698-npfxq\" (UID: \"cc518b9c-ad72-44b0-8719-09b3eaef9a3a\") " pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.364297 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvh52\" (UniqueName: \"kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52\") pod \"auto-csr-approver-29565698-npfxq\" (UID: \"cc518b9c-ad72-44b0-8719-09b3eaef9a3a\") " pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.381168 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvh52\" (UniqueName: \"kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52\") pod \"auto-csr-approver-29565698-npfxq\" (UID: \"cc518b9c-ad72-44b0-8719-09b3eaef9a3a\") " 
pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:00 crc kubenswrapper[4792]: I0319 17:38:00.502145 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:01 crc kubenswrapper[4792]: I0319 17:38:01.005744 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565698-npfxq"] Mar 19 17:38:01 crc kubenswrapper[4792]: I0319 17:38:01.187771 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565698-npfxq" event={"ID":"cc518b9c-ad72-44b0-8719-09b3eaef9a3a","Type":"ContainerStarted","Data":"d9911f8d1318fdcd593157835ffc9bee2901aac1813b71ad3fc361aca0c91476"} Mar 19 17:38:03 crc kubenswrapper[4792]: I0319 17:38:03.212310 4792 generic.go:334] "Generic (PLEG): container finished" podID="cc518b9c-ad72-44b0-8719-09b3eaef9a3a" containerID="91ea75519ce3ae0bc8db6f3e2f24ede71ac96d3f589115ec739cc571168a05b7" exitCode=0 Mar 19 17:38:03 crc kubenswrapper[4792]: I0319 17:38:03.212825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565698-npfxq" event={"ID":"cc518b9c-ad72-44b0-8719-09b3eaef9a3a","Type":"ContainerDied","Data":"91ea75519ce3ae0bc8db6f3e2f24ede71ac96d3f589115ec739cc571168a05b7"} Mar 19 17:38:04 crc kubenswrapper[4792]: I0319 17:38:04.693986 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:04 crc kubenswrapper[4792]: I0319 17:38:04.733081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvh52\" (UniqueName: \"kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52\") pod \"cc518b9c-ad72-44b0-8719-09b3eaef9a3a\" (UID: \"cc518b9c-ad72-44b0-8719-09b3eaef9a3a\") " Mar 19 17:38:04 crc kubenswrapper[4792]: I0319 17:38:04.738746 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52" (OuterVolumeSpecName: "kube-api-access-pvh52") pod "cc518b9c-ad72-44b0-8719-09b3eaef9a3a" (UID: "cc518b9c-ad72-44b0-8719-09b3eaef9a3a"). InnerVolumeSpecName "kube-api-access-pvh52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:38:04 crc kubenswrapper[4792]: I0319 17:38:04.835650 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvh52\" (UniqueName: \"kubernetes.io/projected/cc518b9c-ad72-44b0-8719-09b3eaef9a3a-kube-api-access-pvh52\") on node \"crc\" DevicePath \"\"" Mar 19 17:38:05 crc kubenswrapper[4792]: I0319 17:38:05.233976 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565698-npfxq" event={"ID":"cc518b9c-ad72-44b0-8719-09b3eaef9a3a","Type":"ContainerDied","Data":"d9911f8d1318fdcd593157835ffc9bee2901aac1813b71ad3fc361aca0c91476"} Mar 19 17:38:05 crc kubenswrapper[4792]: I0319 17:38:05.234034 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9911f8d1318fdcd593157835ffc9bee2901aac1813b71ad3fc361aca0c91476" Mar 19 17:38:05 crc kubenswrapper[4792]: I0319 17:38:05.234078 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565698-npfxq" Mar 19 17:38:05 crc kubenswrapper[4792]: I0319 17:38:05.773672 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565692-nxv82"] Mar 19 17:38:05 crc kubenswrapper[4792]: I0319 17:38:05.785052 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565692-nxv82"] Mar 19 17:38:07 crc kubenswrapper[4792]: I0319 17:38:07.763447 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870f81ec-aa3a-4385-84b2-1133132fbfd7" path="/var/lib/kubelet/pods/870f81ec-aa3a-4385-84b2-1133132fbfd7/volumes" Mar 19 17:38:08 crc kubenswrapper[4792]: I0319 17:38:08.050429 4792 scope.go:117] "RemoveContainer" containerID="4b5b6f7c72d90b69f628af78ef46b86925971cbf88c3ec8975289be10e6a6bf4" Mar 19 17:38:20 crc kubenswrapper[4792]: I0319 17:38:20.230448 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:38:20 crc kubenswrapper[4792]: I0319 17:38:20.231098 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.230908 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:38:50 crc kubenswrapper[4792]: 
I0319 17:38:50.231483 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.231528 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.232561 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.232626 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297" gracePeriod=600 Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.727049 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297" exitCode=0 Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.727152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297"} Mar 19 17:38:50 crc 
kubenswrapper[4792]: I0319 17:38:50.727528 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9"} Mar 19 17:38:50 crc kubenswrapper[4792]: I0319 17:38:50.727556 4792 scope.go:117] "RemoveContainer" containerID="c829f490a97c12b8f9728da921f17efd405ca1b8b8f09ba457d669d67f491629" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.141516 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565700-krdxd"] Mar 19 17:40:00 crc kubenswrapper[4792]: E0319 17:40:00.142627 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc518b9c-ad72-44b0-8719-09b3eaef9a3a" containerName="oc" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.142641 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc518b9c-ad72-44b0-8719-09b3eaef9a3a" containerName="oc" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.142904 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc518b9c-ad72-44b0-8719-09b3eaef9a3a" containerName="oc" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.145035 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.147219 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.147646 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.148322 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.160211 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565700-krdxd"] Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.224307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk6lk\" (UniqueName: \"kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk\") pod \"auto-csr-approver-29565700-krdxd\" (UID: \"a42b66dd-ecb4-43c3-82d6-f6f686cef17d\") " pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.327182 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk6lk\" (UniqueName: \"kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk\") pod \"auto-csr-approver-29565700-krdxd\" (UID: \"a42b66dd-ecb4-43c3-82d6-f6f686cef17d\") " pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.344333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk6lk\" (UniqueName: \"kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk\") pod \"auto-csr-approver-29565700-krdxd\" (UID: \"a42b66dd-ecb4-43c3-82d6-f6f686cef17d\") " 
pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:00 crc kubenswrapper[4792]: I0319 17:40:00.480609 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:01 crc kubenswrapper[4792]: I0319 17:40:01.024733 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565700-krdxd"] Mar 19 17:40:01 crc kubenswrapper[4792]: I0319 17:40:01.028535 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:40:01 crc kubenswrapper[4792]: I0319 17:40:01.514374 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565700-krdxd" event={"ID":"a42b66dd-ecb4-43c3-82d6-f6f686cef17d","Type":"ContainerStarted","Data":"bf6c7f9333ca08cdc7fb45d503dfecf877a838a1801e751438a8028fe1901712"} Mar 19 17:40:03 crc kubenswrapper[4792]: I0319 17:40:03.553545 4792 generic.go:334] "Generic (PLEG): container finished" podID="a42b66dd-ecb4-43c3-82d6-f6f686cef17d" containerID="46ba451a15b07f2e7722a25e697ade02ac0814688c9adab0975108677d002f33" exitCode=0 Mar 19 17:40:03 crc kubenswrapper[4792]: I0319 17:40:03.555260 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565700-krdxd" event={"ID":"a42b66dd-ecb4-43c3-82d6-f6f686cef17d","Type":"ContainerDied","Data":"46ba451a15b07f2e7722a25e697ade02ac0814688c9adab0975108677d002f33"} Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.101963 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.250450 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk6lk\" (UniqueName: \"kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk\") pod \"a42b66dd-ecb4-43c3-82d6-f6f686cef17d\" (UID: \"a42b66dd-ecb4-43c3-82d6-f6f686cef17d\") " Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.257093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk" (OuterVolumeSpecName: "kube-api-access-gk6lk") pod "a42b66dd-ecb4-43c3-82d6-f6f686cef17d" (UID: "a42b66dd-ecb4-43c3-82d6-f6f686cef17d"). InnerVolumeSpecName "kube-api-access-gk6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.353123 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk6lk\" (UniqueName: \"kubernetes.io/projected/a42b66dd-ecb4-43c3-82d6-f6f686cef17d-kube-api-access-gk6lk\") on node \"crc\" DevicePath \"\"" Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.583217 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565700-krdxd" event={"ID":"a42b66dd-ecb4-43c3-82d6-f6f686cef17d","Type":"ContainerDied","Data":"bf6c7f9333ca08cdc7fb45d503dfecf877a838a1801e751438a8028fe1901712"} Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.583260 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6c7f9333ca08cdc7fb45d503dfecf877a838a1801e751438a8028fe1901712" Mar 19 17:40:05 crc kubenswrapper[4792]: I0319 17:40:05.583316 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565700-krdxd" Mar 19 17:40:06 crc kubenswrapper[4792]: I0319 17:40:06.183126 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565694-47zcj"] Mar 19 17:40:06 crc kubenswrapper[4792]: I0319 17:40:06.197460 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565694-47zcj"] Mar 19 17:40:07 crc kubenswrapper[4792]: I0319 17:40:07.762352 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1584de-582a-43d3-82bb-095ff16157a6" path="/var/lib/kubelet/pods/ea1584de-582a-43d3-82bb-095ff16157a6/volumes" Mar 19 17:40:08 crc kubenswrapper[4792]: I0319 17:40:08.184592 4792 scope.go:117] "RemoveContainer" containerID="2e480ee562bb80b7dcd2cdf6d26cece3b778d087a351572d77c74325302cbd2d" Mar 19 17:40:50 crc kubenswrapper[4792]: I0319 17:40:50.230487 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:40:50 crc kubenswrapper[4792]: I0319 17:40:50.231217 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:41:20 crc kubenswrapper[4792]: I0319 17:41:20.230881 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:41:20 crc kubenswrapper[4792]: 
I0319 17:41:20.233002 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.231369 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.232174 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.232242 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.233505 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.233603 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" 
containerName="machine-config-daemon" containerID="cri-o://363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" gracePeriod=600 Mar 19 17:41:50 crc kubenswrapper[4792]: E0319 17:41:50.368630 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.921112 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" exitCode=0 Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.921159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9"} Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.921195 4792 scope.go:117] "RemoveContainer" containerID="01540b6dc02fb6945022a3cc1137d899fe9f27aa7853551b37cba7aa134b0297" Mar 19 17:41:50 crc kubenswrapper[4792]: I0319 17:41:50.922253 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:41:50 crc kubenswrapper[4792]: E0319 17:41:50.922646 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.173240 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565702-96zpc"] Mar 19 17:42:00 crc kubenswrapper[4792]: E0319 17:42:00.174425 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42b66dd-ecb4-43c3-82d6-f6f686cef17d" containerName="oc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.174442 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42b66dd-ecb4-43c3-82d6-f6f686cef17d" containerName="oc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.174666 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42b66dd-ecb4-43c3-82d6-f6f686cef17d" containerName="oc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.175467 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.182578 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.183236 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.183411 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.200116 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565702-96zpc"] Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.303262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkrh\" (UniqueName: 
\"kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh\") pod \"auto-csr-approver-29565702-96zpc\" (UID: \"20942f5e-95d8-43c8-94a8-d271f9685020\") " pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.405450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkrh\" (UniqueName: \"kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh\") pod \"auto-csr-approver-29565702-96zpc\" (UID: \"20942f5e-95d8-43c8-94a8-d271f9685020\") " pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.439873 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkrh\" (UniqueName: \"kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh\") pod \"auto-csr-approver-29565702-96zpc\" (UID: \"20942f5e-95d8-43c8-94a8-d271f9685020\") " pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:00 crc kubenswrapper[4792]: I0319 17:42:00.500142 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:01 crc kubenswrapper[4792]: I0319 17:42:01.066551 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565702-96zpc"] Mar 19 17:42:02 crc kubenswrapper[4792]: I0319 17:42:02.111206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565702-96zpc" event={"ID":"20942f5e-95d8-43c8-94a8-d271f9685020","Type":"ContainerStarted","Data":"98ad6a9c0633b7dc96aa57625444b6b22bb7d3bcdd921e81f919a7ee9524c735"} Mar 19 17:42:02 crc kubenswrapper[4792]: I0319 17:42:02.739787 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:42:02 crc kubenswrapper[4792]: E0319 17:42:02.740335 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:42:03 crc kubenswrapper[4792]: I0319 17:42:03.123451 4792 generic.go:334] "Generic (PLEG): container finished" podID="20942f5e-95d8-43c8-94a8-d271f9685020" containerID="06a5a18e1e09d1ec53192667f2e20264cc76ce12f9ae4fbb749ef504ee9ffcee" exitCode=0 Mar 19 17:42:03 crc kubenswrapper[4792]: I0319 17:42:03.123499 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565702-96zpc" event={"ID":"20942f5e-95d8-43c8-94a8-d271f9685020","Type":"ContainerDied","Data":"06a5a18e1e09d1ec53192667f2e20264cc76ce12f9ae4fbb749ef504ee9ffcee"} Mar 19 17:42:04 crc kubenswrapper[4792]: I0319 17:42:04.544386 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:04 crc kubenswrapper[4792]: I0319 17:42:04.644041 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkkrh\" (UniqueName: \"kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh\") pod \"20942f5e-95d8-43c8-94a8-d271f9685020\" (UID: \"20942f5e-95d8-43c8-94a8-d271f9685020\") " Mar 19 17:42:04 crc kubenswrapper[4792]: I0319 17:42:04.655107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh" (OuterVolumeSpecName: "kube-api-access-rkkrh") pod "20942f5e-95d8-43c8-94a8-d271f9685020" (UID: "20942f5e-95d8-43c8-94a8-d271f9685020"). InnerVolumeSpecName "kube-api-access-rkkrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:42:04 crc kubenswrapper[4792]: I0319 17:42:04.747148 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkkrh\" (UniqueName: \"kubernetes.io/projected/20942f5e-95d8-43c8-94a8-d271f9685020-kube-api-access-rkkrh\") on node \"crc\" DevicePath \"\"" Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.171436 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565702-96zpc" event={"ID":"20942f5e-95d8-43c8-94a8-d271f9685020","Type":"ContainerDied","Data":"98ad6a9c0633b7dc96aa57625444b6b22bb7d3bcdd921e81f919a7ee9524c735"} Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.172505 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ad6a9c0633b7dc96aa57625444b6b22bb7d3bcdd921e81f919a7ee9524c735" Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.171509 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565702-96zpc" Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.646589 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565696-td7dl"] Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.660406 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565696-td7dl"] Mar 19 17:42:05 crc kubenswrapper[4792]: I0319 17:42:05.763101 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6d22ef-8199-407e-8a67-db40065a5533" path="/var/lib/kubelet/pods/3a6d22ef-8199-407e-8a67-db40065a5533/volumes" Mar 19 17:42:08 crc kubenswrapper[4792]: I0319 17:42:08.295622 4792 scope.go:117] "RemoveContainer" containerID="0cd4560b02ede1cb3ecb8432cc9fa936e91fc147c07f87ef344562e5743fc8fd" Mar 19 17:42:17 crc kubenswrapper[4792]: I0319 17:42:17.754078 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:42:17 crc kubenswrapper[4792]: E0319 17:42:17.755324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:42:30 crc kubenswrapper[4792]: I0319 17:42:30.741028 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:42:30 crc kubenswrapper[4792]: E0319 17:42:30.742942 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:42:43 crc kubenswrapper[4792]: I0319 17:42:43.739630 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:42:43 crc kubenswrapper[4792]: E0319 17:42:43.740905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:42:56 crc kubenswrapper[4792]: I0319 17:42:56.739979 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:42:56 crc kubenswrapper[4792]: E0319 17:42:56.741224 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:43:10 crc kubenswrapper[4792]: I0319 17:43:10.740330 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:43:10 crc kubenswrapper[4792]: E0319 17:43:10.742928 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:43:23 crc kubenswrapper[4792]: I0319 17:43:23.742372 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:43:23 crc kubenswrapper[4792]: E0319 17:43:23.743307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:43:37 crc kubenswrapper[4792]: I0319 17:43:37.761332 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:43:37 crc kubenswrapper[4792]: E0319 17:43:37.765030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:43:49 crc kubenswrapper[4792]: I0319 17:43:49.744430 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:43:49 crc kubenswrapper[4792]: E0319 17:43:49.745970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.155106 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565704-2hz6q"] Mar 19 17:44:00 crc kubenswrapper[4792]: E0319 17:44:00.156861 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20942f5e-95d8-43c8-94a8-d271f9685020" containerName="oc" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.156881 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="20942f5e-95d8-43c8-94a8-d271f9685020" containerName="oc" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.157269 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="20942f5e-95d8-43c8-94a8-d271f9685020" containerName="oc" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.158431 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.161024 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.161268 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.161589 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.166196 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565704-2hz6q"] Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.335715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444kn\" (UniqueName: \"kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn\") pod \"auto-csr-approver-29565704-2hz6q\" (UID: \"3a88334b-b09f-4cce-b4d0-bb07253f1223\") " pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.441033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444kn\" (UniqueName: \"kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn\") pod \"auto-csr-approver-29565704-2hz6q\" (UID: \"3a88334b-b09f-4cce-b4d0-bb07253f1223\") " pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.463320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444kn\" (UniqueName: \"kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn\") pod \"auto-csr-approver-29565704-2hz6q\" (UID: \"3a88334b-b09f-4cce-b4d0-bb07253f1223\") " 
pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.493470 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:00 crc kubenswrapper[4792]: I0319 17:44:00.828763 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565704-2hz6q"] Mar 19 17:44:01 crc kubenswrapper[4792]: I0319 17:44:01.616296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" event={"ID":"3a88334b-b09f-4cce-b4d0-bb07253f1223","Type":"ContainerStarted","Data":"06d53855777ed3be1c716fb3d4a6ee943e1303b512d375fb8eb85196310dc22c"} Mar 19 17:44:01 crc kubenswrapper[4792]: I0319 17:44:01.745212 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:44:01 crc kubenswrapper[4792]: E0319 17:44:01.745483 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:44:02 crc kubenswrapper[4792]: I0319 17:44:02.628548 4792 generic.go:334] "Generic (PLEG): container finished" podID="3a88334b-b09f-4cce-b4d0-bb07253f1223" containerID="ff04eb66a4ee57689a03d149feef8e2e54736d429124ca514b201582b2420b24" exitCode=0 Mar 19 17:44:02 crc kubenswrapper[4792]: I0319 17:44:02.628616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" event={"ID":"3a88334b-b09f-4cce-b4d0-bb07253f1223","Type":"ContainerDied","Data":"ff04eb66a4ee57689a03d149feef8e2e54736d429124ca514b201582b2420b24"} 
Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.085765 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.230473 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-444kn\" (UniqueName: \"kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn\") pod \"3a88334b-b09f-4cce-b4d0-bb07253f1223\" (UID: \"3a88334b-b09f-4cce-b4d0-bb07253f1223\") " Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.242667 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn" (OuterVolumeSpecName: "kube-api-access-444kn") pod "3a88334b-b09f-4cce-b4d0-bb07253f1223" (UID: "3a88334b-b09f-4cce-b4d0-bb07253f1223"). InnerVolumeSpecName "kube-api-access-444kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.333633 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-444kn\" (UniqueName: \"kubernetes.io/projected/3a88334b-b09f-4cce-b4d0-bb07253f1223-kube-api-access-444kn\") on node \"crc\" DevicePath \"\"" Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.653026 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" event={"ID":"3a88334b-b09f-4cce-b4d0-bb07253f1223","Type":"ContainerDied","Data":"06d53855777ed3be1c716fb3d4a6ee943e1303b512d375fb8eb85196310dc22c"} Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.653069 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d53855777ed3be1c716fb3d4a6ee943e1303b512d375fb8eb85196310dc22c" Mar 19 17:44:04 crc kubenswrapper[4792]: I0319 17:44:04.653084 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565704-2hz6q" Mar 19 17:44:05 crc kubenswrapper[4792]: I0319 17:44:05.179023 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565698-npfxq"] Mar 19 17:44:05 crc kubenswrapper[4792]: I0319 17:44:05.192769 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565698-npfxq"] Mar 19 17:44:05 crc kubenswrapper[4792]: I0319 17:44:05.752466 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc518b9c-ad72-44b0-8719-09b3eaef9a3a" path="/var/lib/kubelet/pods/cc518b9c-ad72-44b0-8719-09b3eaef9a3a/volumes" Mar 19 17:44:08 crc kubenswrapper[4792]: I0319 17:44:08.420508 4792 scope.go:117] "RemoveContainer" containerID="91ea75519ce3ae0bc8db6f3e2f24ede71ac96d3f589115ec739cc571168a05b7" Mar 19 17:44:13 crc kubenswrapper[4792]: I0319 17:44:13.740565 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:44:13 crc kubenswrapper[4792]: E0319 17:44:13.741698 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:44:28 crc kubenswrapper[4792]: I0319 17:44:28.739612 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:44:28 crc kubenswrapper[4792]: E0319 17:44:28.740585 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:44:39 crc kubenswrapper[4792]: I0319 17:44:39.740594 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:44:39 crc kubenswrapper[4792]: E0319 17:44:39.741996 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:44:53 crc kubenswrapper[4792]: I0319 17:44:53.740918 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:44:53 crc kubenswrapper[4792]: E0319 17:44:53.744911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.162024 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5"] Mar 19 17:45:00 crc kubenswrapper[4792]: E0319 17:45:00.163252 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a88334b-b09f-4cce-b4d0-bb07253f1223" containerName="oc" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 
17:45:00.163268 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a88334b-b09f-4cce-b4d0-bb07253f1223" containerName="oc" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.163615 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a88334b-b09f-4cce-b4d0-bb07253f1223" containerName="oc" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.164628 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.168689 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.168778 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.176441 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5"] Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.359358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.359518 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsh4x\" (UniqueName: \"kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.360073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.462927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.462985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsh4x\" (UniqueName: \"kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.463072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.464152 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.474125 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.481046 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsh4x\" (UniqueName: \"kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x\") pod \"collect-profiles-29565705-4h2w5\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:00 crc kubenswrapper[4792]: I0319 17:45:00.499539 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:01 crc kubenswrapper[4792]: I0319 17:45:01.268289 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5"] Mar 19 17:45:01 crc kubenswrapper[4792]: I0319 17:45:01.332633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" event={"ID":"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4","Type":"ContainerStarted","Data":"1da9a19d12df122e38ae93f177bd37a1f367c9743e0389b95cb77e510da165e1"} Mar 19 17:45:02 crc kubenswrapper[4792]: I0319 17:45:02.343406 4792 generic.go:334] "Generic (PLEG): container finished" podID="83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" containerID="4814aa9b7c8ae23fb2b090a8242add0af0b256adbe04db7cef3a49877ff3c0f4" exitCode=0 Mar 19 17:45:02 crc kubenswrapper[4792]: I0319 17:45:02.343462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" event={"ID":"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4","Type":"ContainerDied","Data":"4814aa9b7c8ae23fb2b090a8242add0af0b256adbe04db7cef3a49877ff3c0f4"} Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.775802 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.855634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume\") pod \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.855691 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsh4x\" (UniqueName: \"kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x\") pod \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.855777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume\") pod \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\" (UID: \"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4\") " Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.856249 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume" (OuterVolumeSpecName: "config-volume") pod "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" (UID: "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.861479 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" (UID: "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.861794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x" (OuterVolumeSpecName: "kube-api-access-fsh4x") pod "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" (UID: "83bbb3cd-8b64-4d41-9f5d-47ce462b1df4"). InnerVolumeSpecName "kube-api-access-fsh4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.958473 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.958504 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsh4x\" (UniqueName: \"kubernetes.io/projected/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-kube-api-access-fsh4x\") on node \"crc\" DevicePath \"\"" Mar 19 17:45:03 crc kubenswrapper[4792]: I0319 17:45:03.958515 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83bbb3cd-8b64-4d41-9f5d-47ce462b1df4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.369917 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" event={"ID":"83bbb3cd-8b64-4d41-9f5d-47ce462b1df4","Type":"ContainerDied","Data":"1da9a19d12df122e38ae93f177bd37a1f367c9743e0389b95cb77e510da165e1"} Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.369966 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da9a19d12df122e38ae93f177bd37a1f367c9743e0389b95cb77e510da165e1" Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.369982 4792 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565705-4h2w5" Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.739731 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:45:04 crc kubenswrapper[4792]: E0319 17:45:04.740265 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.859735 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"] Mar 19 17:45:04 crc kubenswrapper[4792]: I0319 17:45:04.878705 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565660-2rpqt"] Mar 19 17:45:05 crc kubenswrapper[4792]: I0319 17:45:05.755136 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be29048b-fc27-449a-abb9-b15ef75ca132" path="/var/lib/kubelet/pods/be29048b-fc27-449a-abb9-b15ef75ca132/volumes" Mar 19 17:45:08 crc kubenswrapper[4792]: I0319 17:45:08.534152 4792 scope.go:117] "RemoveContainer" containerID="2dd77ce8f6ee3728227df8d49b4267c7afa147e80400b20834e19c9269be6d44" Mar 19 17:45:17 crc kubenswrapper[4792]: I0319 17:45:17.753186 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:45:17 crc kubenswrapper[4792]: E0319 17:45:17.754454 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:45:28 crc kubenswrapper[4792]: I0319 17:45:28.740681 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:45:28 crc kubenswrapper[4792]: E0319 17:45:28.741763 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:45:39 crc kubenswrapper[4792]: I0319 17:45:39.743154 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:45:39 crc kubenswrapper[4792]: E0319 17:45:39.744264 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:45:43 crc kubenswrapper[4792]: E0319 17:45:43.217115 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:39260->38.102.83.222:39595: write tcp 38.102.83.222:39260->38.102.83.222:39595: write: broken pipe Mar 19 17:45:53 crc kubenswrapper[4792]: I0319 17:45:53.745896 4792 scope.go:117] "RemoveContainer" 
containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:45:53 crc kubenswrapper[4792]: E0319 17:45:53.750493 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.147369 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565706-tsqq8"] Mar 19 17:46:00 crc kubenswrapper[4792]: E0319 17:46:00.148863 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" containerName="collect-profiles" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.148885 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" containerName="collect-profiles" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.149173 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bbb3cd-8b64-4d41-9f5d-47ce462b1df4" containerName="collect-profiles" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.150955 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.153184 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.153199 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.153658 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.160465 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565706-tsqq8"] Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.207085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746xh\" (UniqueName: \"kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh\") pod \"auto-csr-approver-29565706-tsqq8\" (UID: \"1e9d1348-e5a6-4a1b-9167-c16e45cc0202\") " pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.310819 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746xh\" (UniqueName: \"kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh\") pod \"auto-csr-approver-29565706-tsqq8\" (UID: \"1e9d1348-e5a6-4a1b-9167-c16e45cc0202\") " pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.330936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746xh\" (UniqueName: \"kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh\") pod \"auto-csr-approver-29565706-tsqq8\" (UID: \"1e9d1348-e5a6-4a1b-9167-c16e45cc0202\") " 
pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:00 crc kubenswrapper[4792]: I0319 17:46:00.477243 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:01 crc kubenswrapper[4792]: I0319 17:46:01.226539 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565706-tsqq8"] Mar 19 17:46:01 crc kubenswrapper[4792]: I0319 17:46:01.238558 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:46:02 crc kubenswrapper[4792]: I0319 17:46:02.074165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" event={"ID":"1e9d1348-e5a6-4a1b-9167-c16e45cc0202","Type":"ContainerStarted","Data":"5881c03c4a473a4059636a9119273f47a4bd86781218f534d1633d8fa1c7cf8c"} Mar 19 17:46:03 crc kubenswrapper[4792]: I0319 17:46:03.086325 4792 generic.go:334] "Generic (PLEG): container finished" podID="1e9d1348-e5a6-4a1b-9167-c16e45cc0202" containerID="b005ca739ea08e2cd5b6ffaf4b6fae2ac246be582e3c7575e00522909b9ed406" exitCode=0 Mar 19 17:46:03 crc kubenswrapper[4792]: I0319 17:46:03.086426 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" event={"ID":"1e9d1348-e5a6-4a1b-9167-c16e45cc0202","Type":"ContainerDied","Data":"b005ca739ea08e2cd5b6ffaf4b6fae2ac246be582e3c7575e00522909b9ed406"} Mar 19 17:46:04 crc kubenswrapper[4792]: I0319 17:46:04.646250 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:04 crc kubenswrapper[4792]: I0319 17:46:04.742257 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746xh\" (UniqueName: \"kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh\") pod \"1e9d1348-e5a6-4a1b-9167-c16e45cc0202\" (UID: \"1e9d1348-e5a6-4a1b-9167-c16e45cc0202\") " Mar 19 17:46:04 crc kubenswrapper[4792]: I0319 17:46:04.751296 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh" (OuterVolumeSpecName: "kube-api-access-746xh") pod "1e9d1348-e5a6-4a1b-9167-c16e45cc0202" (UID: "1e9d1348-e5a6-4a1b-9167-c16e45cc0202"). InnerVolumeSpecName "kube-api-access-746xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:46:04 crc kubenswrapper[4792]: I0319 17:46:04.845621 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746xh\" (UniqueName: \"kubernetes.io/projected/1e9d1348-e5a6-4a1b-9167-c16e45cc0202-kube-api-access-746xh\") on node \"crc\" DevicePath \"\"" Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.118661 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" event={"ID":"1e9d1348-e5a6-4a1b-9167-c16e45cc0202","Type":"ContainerDied","Data":"5881c03c4a473a4059636a9119273f47a4bd86781218f534d1633d8fa1c7cf8c"} Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.118706 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5881c03c4a473a4059636a9119273f47a4bd86781218f534d1633d8fa1c7cf8c" Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.119172 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565706-tsqq8" Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.741297 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:46:05 crc kubenswrapper[4792]: E0319 17:46:05.741586 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.751756 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565700-krdxd"] Mar 19 17:46:05 crc kubenswrapper[4792]: I0319 17:46:05.760932 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565700-krdxd"] Mar 19 17:46:07 crc kubenswrapper[4792]: I0319 17:46:07.755979 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42b66dd-ecb4-43c3-82d6-f6f686cef17d" path="/var/lib/kubelet/pods/a42b66dd-ecb4-43c3-82d6-f6f686cef17d/volumes" Mar 19 17:46:08 crc kubenswrapper[4792]: I0319 17:46:08.642379 4792 scope.go:117] "RemoveContainer" containerID="46ba451a15b07f2e7722a25e697ade02ac0814688c9adab0975108677d002f33" Mar 19 17:46:16 crc kubenswrapper[4792]: I0319 17:46:16.740443 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:46:16 crc kubenswrapper[4792]: E0319 17:46:16.741410 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:46:31 crc kubenswrapper[4792]: I0319 17:46:31.740748 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:46:31 crc kubenswrapper[4792]: E0319 17:46:31.742537 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:46:46 crc kubenswrapper[4792]: I0319 17:46:46.740445 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:46:46 crc kubenswrapper[4792]: E0319 17:46:46.741386 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:46:57 crc kubenswrapper[4792]: I0319 17:46:57.747558 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:46:58 crc kubenswrapper[4792]: I0319 17:46:58.776076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9"} Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.722930 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:47:16 crc kubenswrapper[4792]: E0319 17:47:16.723940 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9d1348-e5a6-4a1b-9167-c16e45cc0202" containerName="oc" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.723952 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9d1348-e5a6-4a1b-9167-c16e45cc0202" containerName="oc" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.724167 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9d1348-e5a6-4a1b-9167-c16e45cc0202" containerName="oc" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.725793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.751855 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.827929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22l4\" (UniqueName: \"kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.828068 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content\") pod \"redhat-operators-fp56f\" (UID: 
\"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.828284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.931257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.931371 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.931553 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22l4\" (UniqueName: \"kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.931761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " 
pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.932016 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:16 crc kubenswrapper[4792]: I0319 17:47:16.951588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22l4\" (UniqueName: \"kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4\") pod \"redhat-operators-fp56f\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:17 crc kubenswrapper[4792]: I0319 17:47:17.069918 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:17 crc kubenswrapper[4792]: I0319 17:47:17.617537 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:47:17 crc kubenswrapper[4792]: I0319 17:47:17.992422 4792 generic.go:334] "Generic (PLEG): container finished" podID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerID="cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b" exitCode=0 Mar 19 17:47:17 crc kubenswrapper[4792]: I0319 17:47:17.992775 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerDied","Data":"cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b"} Mar 19 17:47:17 crc kubenswrapper[4792]: I0319 17:47:17.992812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" 
event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerStarted","Data":"d97d490225428a095490a11f6a67fe76d384db90e955c47d480f59783819b46f"} Mar 19 17:47:19 crc kubenswrapper[4792]: I0319 17:47:19.006986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerStarted","Data":"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a"} Mar 19 17:47:24 crc kubenswrapper[4792]: I0319 17:47:24.059828 4792 generic.go:334] "Generic (PLEG): container finished" podID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerID="3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a" exitCode=0 Mar 19 17:47:24 crc kubenswrapper[4792]: I0319 17:47:24.059942 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerDied","Data":"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a"} Mar 19 17:47:25 crc kubenswrapper[4792]: I0319 17:47:25.074793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerStarted","Data":"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69"} Mar 19 17:47:25 crc kubenswrapper[4792]: I0319 17:47:25.101401 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fp56f" podStartSLOduration=2.3851930230000002 podStartE2EDuration="9.101380409s" podCreationTimestamp="2026-03-19 17:47:16 +0000 UTC" firstStartedPulling="2026-03-19 17:47:17.99504139 +0000 UTC m=+4001.141098930" lastFinishedPulling="2026-03-19 17:47:24.711228776 +0000 UTC m=+4007.857286316" observedRunningTime="2026-03-19 17:47:25.096129895 +0000 UTC m=+4008.242187445" watchObservedRunningTime="2026-03-19 17:47:25.101380409 +0000 UTC 
m=+4008.247437959" Mar 19 17:47:27 crc kubenswrapper[4792]: I0319 17:47:27.070081 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:27 crc kubenswrapper[4792]: I0319 17:47:27.070387 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:28 crc kubenswrapper[4792]: I0319 17:47:28.135438 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fp56f" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" probeResult="failure" output=< Mar 19 17:47:28 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:47:28 crc kubenswrapper[4792]: > Mar 19 17:47:38 crc kubenswrapper[4792]: I0319 17:47:38.121646 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fp56f" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" probeResult="failure" output=< Mar 19 17:47:38 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:47:38 crc kubenswrapper[4792]: > Mar 19 17:47:48 crc kubenswrapper[4792]: I0319 17:47:48.160151 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fp56f" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" probeResult="failure" output=< Mar 19 17:47:48 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:47:48 crc kubenswrapper[4792]: > Mar 19 17:47:57 crc kubenswrapper[4792]: I0319 17:47:57.138508 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:57 crc kubenswrapper[4792]: I0319 17:47:57.202875 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:57 crc kubenswrapper[4792]: I0319 17:47:57.380657 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:47:58 crc kubenswrapper[4792]: I0319 17:47:58.430094 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fp56f" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" containerID="cri-o://125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69" gracePeriod=2 Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.172869 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.212769 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content\") pod \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.212957 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities\") pod \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.212994 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22l4\" (UniqueName: \"kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4\") pod \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\" (UID: \"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c\") " Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.214897 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities" (OuterVolumeSpecName: "utilities") pod "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" (UID: "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.229113 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4" (OuterVolumeSpecName: "kube-api-access-f22l4") pod "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" (UID: "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c"). InnerVolumeSpecName "kube-api-access-f22l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.315384 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.315416 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f22l4\" (UniqueName: \"kubernetes.io/projected/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-kube-api-access-f22l4\") on node \"crc\" DevicePath \"\"" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.444734 4792 generic.go:334] "Generic (PLEG): container finished" podID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerID="125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69" exitCode=0 Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.444777 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerDied","Data":"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69"} Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.444802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fp56f" event={"ID":"6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c","Type":"ContainerDied","Data":"d97d490225428a095490a11f6a67fe76d384db90e955c47d480f59783819b46f"} Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.444817 4792 scope.go:117] "RemoveContainer" containerID="125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.444975 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp56f" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.466992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" (UID: "6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.471786 4792 scope.go:117] "RemoveContainer" containerID="3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.512358 4792 scope.go:117] "RemoveContainer" containerID="cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.521559 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.574228 4792 scope.go:117] "RemoveContainer" containerID="125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69" Mar 19 17:47:59 crc kubenswrapper[4792]: E0319 17:47:59.574552 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69\": container with ID starting with 125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69 not found: ID does not exist" containerID="125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.574592 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69"} err="failed to get container status \"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69\": rpc error: code = NotFound desc = could not find container \"125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69\": container with ID starting with 125b5a5f774a7cda5576dfba879cd11a7f3e32934b612e44b987a8169fd92a69 not found: ID does not exist" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.574620 4792 scope.go:117] "RemoveContainer" containerID="3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a" Mar 19 17:47:59 crc kubenswrapper[4792]: E0319 17:47:59.575138 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a\": container with ID starting with 3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a not found: ID does not exist" containerID="3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.575370 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a"} err="failed to get container status \"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a\": rpc error: code = NotFound desc = could not find container \"3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a\": 
container with ID starting with 3bbbdd41da758ee9b95c7528f1a8396b1037fe5035564eed320b2c233380935a not found: ID does not exist" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.575566 4792 scope.go:117] "RemoveContainer" containerID="cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b" Mar 19 17:47:59 crc kubenswrapper[4792]: E0319 17:47:59.576205 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b\": container with ID starting with cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b not found: ID does not exist" containerID="cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.576615 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b"} err="failed to get container status \"cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b\": rpc error: code = NotFound desc = could not find container \"cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b\": container with ID starting with cb2f730f24068a74c5ee614302fe8190eed260af22d866bcbd2bf2aaab48154b not found: ID does not exist" Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.786263 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:47:59 crc kubenswrapper[4792]: I0319 17:47:59.796138 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fp56f"] Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.293813 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565708-flg79"] Mar 19 17:48:00 crc kubenswrapper[4792]: E0319 17:48:00.294352 4792 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="extract-content" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.294369 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="extract-content" Mar 19 17:48:00 crc kubenswrapper[4792]: E0319 17:48:00.294401 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="extract-utilities" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.294407 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="extract-utilities" Mar 19 17:48:00 crc kubenswrapper[4792]: E0319 17:48:00.294437 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.294442 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.294680 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" containerName="registry-server" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.295557 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.298214 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.298531 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.298660 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.315174 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565708-flg79"] Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.355607 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psctc\" (UniqueName: \"kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc\") pod \"auto-csr-approver-29565708-flg79\" (UID: \"0a7925ed-4bf9-4a06-a677-851abaec5d15\") " pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.458029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psctc\" (UniqueName: \"kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc\") pod \"auto-csr-approver-29565708-flg79\" (UID: \"0a7925ed-4bf9-4a06-a677-851abaec5d15\") " pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.479038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psctc\" (UniqueName: \"kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc\") pod \"auto-csr-approver-29565708-flg79\" (UID: \"0a7925ed-4bf9-4a06-a677-851abaec5d15\") " 
pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:00 crc kubenswrapper[4792]: I0319 17:48:00.640425 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:01 crc kubenswrapper[4792]: I0319 17:48:01.143355 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565708-flg79"] Mar 19 17:48:01 crc kubenswrapper[4792]: I0319 17:48:01.466278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565708-flg79" event={"ID":"0a7925ed-4bf9-4a06-a677-851abaec5d15","Type":"ContainerStarted","Data":"25b0500ff00faa226703b20e9b0321ba47be62310ada5895ae00073f27962a23"} Mar 19 17:48:01 crc kubenswrapper[4792]: I0319 17:48:01.750979 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c" path="/var/lib/kubelet/pods/6714e5e9-ddb6-46f9-8d8a-7767eafe1f4c/volumes" Mar 19 17:48:02 crc kubenswrapper[4792]: I0319 17:48:02.480711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565708-flg79" event={"ID":"0a7925ed-4bf9-4a06-a677-851abaec5d15","Type":"ContainerStarted","Data":"7a58550cb1d6f7c5cdbaa66d9d4f9472118bb7558838b78603c35090dc1830db"} Mar 19 17:48:02 crc kubenswrapper[4792]: I0319 17:48:02.508894 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565708-flg79" podStartSLOduration=1.650949563 podStartE2EDuration="2.508873011s" podCreationTimestamp="2026-03-19 17:48:00 +0000 UTC" firstStartedPulling="2026-03-19 17:48:01.158070757 +0000 UTC m=+4044.304128297" lastFinishedPulling="2026-03-19 17:48:02.015994205 +0000 UTC m=+4045.162051745" observedRunningTime="2026-03-19 17:48:02.503302738 +0000 UTC m=+4045.649360278" watchObservedRunningTime="2026-03-19 17:48:02.508873011 +0000 UTC m=+4045.654930551" Mar 19 17:48:04 crc kubenswrapper[4792]: 
I0319 17:48:04.505088 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a7925ed-4bf9-4a06-a677-851abaec5d15" containerID="7a58550cb1d6f7c5cdbaa66d9d4f9472118bb7558838b78603c35090dc1830db" exitCode=0 Mar 19 17:48:04 crc kubenswrapper[4792]: I0319 17:48:04.505193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565708-flg79" event={"ID":"0a7925ed-4bf9-4a06-a677-851abaec5d15","Type":"ContainerDied","Data":"7a58550cb1d6f7c5cdbaa66d9d4f9472118bb7558838b78603c35090dc1830db"} Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.371522 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.427576 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psctc\" (UniqueName: \"kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc\") pod \"0a7925ed-4bf9-4a06-a677-851abaec5d15\" (UID: \"0a7925ed-4bf9-4a06-a677-851abaec5d15\") " Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.462429 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc" (OuterVolumeSpecName: "kube-api-access-psctc") pod "0a7925ed-4bf9-4a06-a677-851abaec5d15" (UID: "0a7925ed-4bf9-4a06-a677-851abaec5d15"). InnerVolumeSpecName "kube-api-access-psctc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.530860 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psctc\" (UniqueName: \"kubernetes.io/projected/0a7925ed-4bf9-4a06-a677-851abaec5d15-kube-api-access-psctc\") on node \"crc\" DevicePath \"\"" Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.535816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565708-flg79" event={"ID":"0a7925ed-4bf9-4a06-a677-851abaec5d15","Type":"ContainerDied","Data":"25b0500ff00faa226703b20e9b0321ba47be62310ada5895ae00073f27962a23"} Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.535953 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b0500ff00faa226703b20e9b0321ba47be62310ada5895ae00073f27962a23" Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.536015 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565708-flg79" Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.597176 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565702-96zpc"] Mar 19 17:48:06 crc kubenswrapper[4792]: I0319 17:48:06.608854 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565702-96zpc"] Mar 19 17:48:07 crc kubenswrapper[4792]: I0319 17:48:07.759092 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20942f5e-95d8-43c8-94a8-d271f9685020" path="/var/lib/kubelet/pods/20942f5e-95d8-43c8-94a8-d271f9685020/volumes" Mar 19 17:48:08 crc kubenswrapper[4792]: I0319 17:48:08.783386 4792 scope.go:117] "RemoveContainer" containerID="06a5a18e1e09d1ec53192667f2e20264cc76ce12f9ae4fbb749ef504ee9ffcee" Mar 19 17:48:52 crc kubenswrapper[4792]: I0319 17:48:52.973160 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:48:52 crc kubenswrapper[4792]: E0319 17:48:52.974441 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7925ed-4bf9-4a06-a677-851abaec5d15" containerName="oc" Mar 19 17:48:52 crc kubenswrapper[4792]: I0319 17:48:52.974458 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7925ed-4bf9-4a06-a677-851abaec5d15" containerName="oc" Mar 19 17:48:52 crc kubenswrapper[4792]: I0319 17:48:52.974756 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7925ed-4bf9-4a06-a677-851abaec5d15" containerName="oc" Mar 19 17:48:52 crc kubenswrapper[4792]: I0319 17:48:52.977066 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.000816 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.060215 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8mnb\" (UniqueName: \"kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.060596 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.060648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.169958 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.170113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.170556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8mnb\" (UniqueName: \"kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.171416 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.171519 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.191218 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8mnb\" (UniqueName: \"kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb\") pod \"certified-operators-llv4x\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.309673 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:48:53 crc kubenswrapper[4792]: I0319 17:48:53.900241 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:48:54 crc kubenswrapper[4792]: I0319 17:48:54.153710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerStarted","Data":"873c71ed31cf36ca7b244d387ca73077b0af4e4ddfeae3387a9dc8e02b6485ca"} Mar 19 17:48:55 crc kubenswrapper[4792]: I0319 17:48:55.166094 4792 generic.go:334] "Generic (PLEG): container finished" podID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerID="cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8" exitCode=0 Mar 19 17:48:55 crc kubenswrapper[4792]: I0319 17:48:55.166392 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerDied","Data":"cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8"} Mar 19 17:48:56 crc kubenswrapper[4792]: I0319 17:48:56.184783 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerStarted","Data":"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78"} Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.156124 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.158700 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.218429 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.314255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.314624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7l8f\" (UniqueName: \"kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.314831 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" 
Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.417172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7l8f\" (UniqueName: \"kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.417681 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.417940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.418153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.418260 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: 
I0319 17:48:57.436864 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7l8f\" (UniqueName: \"kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f\") pod \"community-operators-99qcv\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:57 crc kubenswrapper[4792]: I0319 17:48:57.505501 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:48:58 crc kubenswrapper[4792]: I0319 17:48:58.151654 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:48:58 crc kubenswrapper[4792]: W0319 17:48:58.173752 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f03915c_294c_49c8_aeae_80f704a3ba0c.slice/crio-7495c093ca174387e04c2e607789978fe17158844591985645f489c209945c2f WatchSource:0}: Error finding container 7495c093ca174387e04c2e607789978fe17158844591985645f489c209945c2f: Status 404 returned error can't find the container with id 7495c093ca174387e04c2e607789978fe17158844591985645f489c209945c2f Mar 19 17:48:58 crc kubenswrapper[4792]: I0319 17:48:58.251610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerStarted","Data":"7495c093ca174387e04c2e607789978fe17158844591985645f489c209945c2f"} Mar 19 17:48:58 crc kubenswrapper[4792]: I0319 17:48:58.254185 4792 generic.go:334] "Generic (PLEG): container finished" podID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerID="5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78" exitCode=0 Mar 19 17:48:58 crc kubenswrapper[4792]: I0319 17:48:58.254214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerDied","Data":"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78"} Mar 19 17:48:59 crc kubenswrapper[4792]: I0319 17:48:59.269943 4792 generic.go:334] "Generic (PLEG): container finished" podID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerID="a1575c46956146110546322c69101cac25e64acc89b2ed52c3cc46a2aa2d397c" exitCode=0 Mar 19 17:48:59 crc kubenswrapper[4792]: I0319 17:48:59.269999 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerDied","Data":"a1575c46956146110546322c69101cac25e64acc89b2ed52c3cc46a2aa2d397c"} Mar 19 17:49:00 crc kubenswrapper[4792]: I0319 17:49:00.283600 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerStarted","Data":"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09"} Mar 19 17:49:00 crc kubenswrapper[4792]: I0319 17:49:00.312785 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llv4x" podStartSLOduration=4.827912049 podStartE2EDuration="8.312764976s" podCreationTimestamp="2026-03-19 17:48:52 +0000 UTC" firstStartedPulling="2026-03-19 17:48:55.168734348 +0000 UTC m=+4098.314791918" lastFinishedPulling="2026-03-19 17:48:58.653587305 +0000 UTC m=+4101.799644845" observedRunningTime="2026-03-19 17:49:00.311580033 +0000 UTC m=+4103.457637573" watchObservedRunningTime="2026-03-19 17:49:00.312764976 +0000 UTC m=+4103.458822516" Mar 19 17:49:01 crc kubenswrapper[4792]: I0319 17:49:01.298282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" 
event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerStarted","Data":"f24c75195fd32d87f458556c2ebb1d7e979db61d24b2f34db3194d4c981b7682"} Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.310095 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.310371 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.356561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.834272 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.837269 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.864282 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.933711 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.933914 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:03 crc kubenswrapper[4792]: I0319 17:49:03.933985 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftml\" (UniqueName: \"kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.036251 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.036417 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6ftml\" (UniqueName: \"kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.036539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.048011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.048161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.065159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftml\" (UniqueName: \"kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml\") pod \"redhat-marketplace-q62c8\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.162540 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.388171 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:04 crc kubenswrapper[4792]: I0319 17:49:04.793098 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:05 crc kubenswrapper[4792]: I0319 17:49:05.282662 4792 generic.go:334] "Generic (PLEG): container finished" podID="008fec2e-b94d-457f-9422-e8715ff85e42" containerID="cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c" exitCode=0 Mar 19 17:49:05 crc kubenswrapper[4792]: I0319 17:49:05.282726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerDied","Data":"cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c"} Mar 19 17:49:05 crc kubenswrapper[4792]: I0319 17:49:05.283142 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerStarted","Data":"9ffe527f230cc6afdc1858a869aef6fe40de9c37de5163a54597a3ec93dc058d"} Mar 19 17:49:05 crc kubenswrapper[4792]: I0319 17:49:05.285468 4792 generic.go:334] "Generic (PLEG): container finished" podID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerID="f24c75195fd32d87f458556c2ebb1d7e979db61d24b2f34db3194d4c981b7682" exitCode=0 Mar 19 17:49:05 crc kubenswrapper[4792]: I0319 17:49:05.286859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerDied","Data":"f24c75195fd32d87f458556c2ebb1d7e979db61d24b2f34db3194d4c981b7682"} Mar 19 17:49:06 crc kubenswrapper[4792]: I0319 17:49:06.335211 4792 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:49:06 crc kubenswrapper[4792]: I0319 17:49:06.335995 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llv4x" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="registry-server" containerID="cri-o://a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09" gracePeriod=2 Mar 19 17:49:06 crc kubenswrapper[4792]: I0319 17:49:06.965959 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.132693 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8mnb\" (UniqueName: \"kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb\") pod \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.132931 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content\") pod \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.132996 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities\") pod \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\" (UID: \"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b\") " Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.134257 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities" (OuterVolumeSpecName: "utilities") pod 
"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" (UID: "eceb0b35-ff75-4aa5-8cb3-24cd93c4732b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.138101 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb" (OuterVolumeSpecName: "kube-api-access-f8mnb") pod "eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" (UID: "eceb0b35-ff75-4aa5-8cb3-24cd93c4732b"). InnerVolumeSpecName "kube-api-access-f8mnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.188228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" (UID: "eceb0b35-ff75-4aa5-8cb3-24cd93c4732b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.235469 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8mnb\" (UniqueName: \"kubernetes.io/projected/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-kube-api-access-f8mnb\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.235499 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.235510 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.333375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerStarted","Data":"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6"} Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.338858 4792 generic.go:334] "Generic (PLEG): container finished" podID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerID="a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09" exitCode=0 Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.338940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llv4x" event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerDied","Data":"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09"} Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.338970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llv4x" 
event={"ID":"eceb0b35-ff75-4aa5-8cb3-24cd93c4732b","Type":"ContainerDied","Data":"873c71ed31cf36ca7b244d387ca73077b0af4e4ddfeae3387a9dc8e02b6485ca"} Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.338976 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llv4x" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.338987 4792 scope.go:117] "RemoveContainer" containerID="a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.344428 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerStarted","Data":"276ce8e1becf6ec7783efb815a1de93338c26504b0afa482a543a2ef5459898f"} Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.367003 4792 scope.go:117] "RemoveContainer" containerID="5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.393203 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99qcv" podStartSLOduration=3.522597822 podStartE2EDuration="10.393184002s" podCreationTimestamp="2026-03-19 17:48:57 +0000 UTC" firstStartedPulling="2026-03-19 17:48:59.272719841 +0000 UTC m=+4102.418777381" lastFinishedPulling="2026-03-19 17:49:06.143306011 +0000 UTC m=+4109.289363561" observedRunningTime="2026-03-19 17:49:07.376768082 +0000 UTC m=+4110.522825662" watchObservedRunningTime="2026-03-19 17:49:07.393184002 +0000 UTC m=+4110.539241552" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.409037 4792 scope.go:117] "RemoveContainer" containerID="cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.424076 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.438974 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llv4x"] Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.447327 4792 scope.go:117] "RemoveContainer" containerID="a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09" Mar 19 17:49:07 crc kubenswrapper[4792]: E0319 17:49:07.447773 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09\": container with ID starting with a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09 not found: ID does not exist" containerID="a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.447801 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09"} err="failed to get container status \"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09\": rpc error: code = NotFound desc = could not find container \"a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09\": container with ID starting with a9c83ce7dd60be448ceb4bf15b8e690cd3b809863adfa97d149627616085ee09 not found: ID does not exist" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.447822 4792 scope.go:117] "RemoveContainer" containerID="5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78" Mar 19 17:49:07 crc kubenswrapper[4792]: E0319 17:49:07.448133 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78\": container with ID starting with 
5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78 not found: ID does not exist" containerID="5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.448157 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78"} err="failed to get container status \"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78\": rpc error: code = NotFound desc = could not find container \"5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78\": container with ID starting with 5b8483e1e548d5074dc4f7227c9db332bc14d24df29da80b92dec37191e05a78 not found: ID does not exist" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.448172 4792 scope.go:117] "RemoveContainer" containerID="cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8" Mar 19 17:49:07 crc kubenswrapper[4792]: E0319 17:49:07.448583 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8\": container with ID starting with cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8 not found: ID does not exist" containerID="cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.448627 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8"} err="failed to get container status \"cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8\": rpc error: code = NotFound desc = could not find container \"cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8\": container with ID starting with cc6de2c963b88e5eeb416487242855a267c0df12cba8787f7cc14b84d204a3d8 not found: ID does not 
exist" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.506506 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.506556 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:07 crc kubenswrapper[4792]: I0319 17:49:07.763148 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" path="/var/lib/kubelet/pods/eceb0b35-ff75-4aa5-8cb3-24cd93c4732b/volumes" Mar 19 17:49:08 crc kubenswrapper[4792]: I0319 17:49:08.358465 4792 generic.go:334] "Generic (PLEG): container finished" podID="008fec2e-b94d-457f-9422-e8715ff85e42" containerID="3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6" exitCode=0 Mar 19 17:49:08 crc kubenswrapper[4792]: I0319 17:49:08.358516 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerDied","Data":"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6"} Mar 19 17:49:08 crc kubenswrapper[4792]: I0319 17:49:08.562781 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-99qcv" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="registry-server" probeResult="failure" output=< Mar 19 17:49:08 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:49:08 crc kubenswrapper[4792]: > Mar 19 17:49:09 crc kubenswrapper[4792]: I0319 17:49:09.372410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerStarted","Data":"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88"} Mar 19 17:49:09 crc kubenswrapper[4792]: 
I0319 17:49:09.397252 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q62c8" podStartSLOduration=2.940498677 podStartE2EDuration="6.397230372s" podCreationTimestamp="2026-03-19 17:49:03 +0000 UTC" firstStartedPulling="2026-03-19 17:49:05.287919372 +0000 UTC m=+4108.433976932" lastFinishedPulling="2026-03-19 17:49:08.744651086 +0000 UTC m=+4111.890708627" observedRunningTime="2026-03-19 17:49:09.388762109 +0000 UTC m=+4112.534819689" watchObservedRunningTime="2026-03-19 17:49:09.397230372 +0000 UTC m=+4112.543287912" Mar 19 17:49:14 crc kubenswrapper[4792]: I0319 17:49:14.163402 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:14 crc kubenswrapper[4792]: I0319 17:49:14.164554 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:14 crc kubenswrapper[4792]: I0319 17:49:14.242707 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:14 crc kubenswrapper[4792]: I0319 17:49:14.505104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:14 crc kubenswrapper[4792]: I0319 17:49:14.575636 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:16 crc kubenswrapper[4792]: I0319 17:49:16.456300 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q62c8" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="registry-server" containerID="cri-o://2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88" gracePeriod=2 Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.028511 4792 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.092750 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ftml\" (UniqueName: \"kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml\") pod \"008fec2e-b94d-457f-9422-e8715ff85e42\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.092801 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content\") pod \"008fec2e-b94d-457f-9422-e8715ff85e42\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.093211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities\") pod \"008fec2e-b94d-457f-9422-e8715ff85e42\" (UID: \"008fec2e-b94d-457f-9422-e8715ff85e42\") " Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.094057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities" (OuterVolumeSpecName: "utilities") pod "008fec2e-b94d-457f-9422-e8715ff85e42" (UID: "008fec2e-b94d-457f-9422-e8715ff85e42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.098886 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml" (OuterVolumeSpecName: "kube-api-access-6ftml") pod "008fec2e-b94d-457f-9422-e8715ff85e42" (UID: "008fec2e-b94d-457f-9422-e8715ff85e42"). InnerVolumeSpecName "kube-api-access-6ftml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.119148 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008fec2e-b94d-457f-9422-e8715ff85e42" (UID: "008fec2e-b94d-457f-9422-e8715ff85e42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.196009 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.196039 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ftml\" (UniqueName: \"kubernetes.io/projected/008fec2e-b94d-457f-9422-e8715ff85e42-kube-api-access-6ftml\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.196052 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008fec2e-b94d-457f-9422-e8715ff85e42-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.472236 4792 generic.go:334] "Generic (PLEG): container finished" podID="008fec2e-b94d-457f-9422-e8715ff85e42" containerID="2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88" exitCode=0 Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.472331 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerDied","Data":"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88"} Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.472575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-q62c8" event={"ID":"008fec2e-b94d-457f-9422-e8715ff85e42","Type":"ContainerDied","Data":"9ffe527f230cc6afdc1858a869aef6fe40de9c37de5163a54597a3ec93dc058d"} Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.472601 4792 scope.go:117] "RemoveContainer" containerID="2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.472415 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q62c8" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.503531 4792 scope.go:117] "RemoveContainer" containerID="3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.530014 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.545255 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q62c8"] Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.553962 4792 scope.go:117] "RemoveContainer" containerID="cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.585501 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.587940 4792 scope.go:117] "RemoveContainer" containerID="2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88" Mar 19 17:49:17 crc kubenswrapper[4792]: E0319 17:49:17.588370 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88\": container with ID starting with 
2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88 not found: ID does not exist" containerID="2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.588439 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88"} err="failed to get container status \"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88\": rpc error: code = NotFound desc = could not find container \"2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88\": container with ID starting with 2e389a0075404551a907e210b832a52161b90db3c8270f9fc22b5877d0ee0e88 not found: ID does not exist" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.588467 4792 scope.go:117] "RemoveContainer" containerID="3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6" Mar 19 17:49:17 crc kubenswrapper[4792]: E0319 17:49:17.590788 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6\": container with ID starting with 3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6 not found: ID does not exist" containerID="3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.590867 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6"} err="failed to get container status \"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6\": rpc error: code = NotFound desc = could not find container \"3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6\": container with ID starting with 3afac1f9972a91a0308f29c6c5bf3fd2ff0f2b238bc0ffa907f69108bc48a1a6 not found: ID does not 
exist" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.590892 4792 scope.go:117] "RemoveContainer" containerID="cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c" Mar 19 17:49:17 crc kubenswrapper[4792]: E0319 17:49:17.591449 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c\": container with ID starting with cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c not found: ID does not exist" containerID="cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.591501 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c"} err="failed to get container status \"cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c\": rpc error: code = NotFound desc = could not find container \"cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c\": container with ID starting with cd331192af28f41874c3084c55a40e357cf4d7c8f607ac1d17bb09dc4a4ff60c not found: ID does not exist" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.644302 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:17 crc kubenswrapper[4792]: I0319 17:49:17.760590 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" path="/var/lib/kubelet/pods/008fec2e-b94d-457f-9422-e8715ff85e42/volumes" Mar 19 17:49:19 crc kubenswrapper[4792]: I0319 17:49:19.894697 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:49:19 crc kubenswrapper[4792]: I0319 17:49:19.895460 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-99qcv" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="registry-server" containerID="cri-o://276ce8e1becf6ec7783efb815a1de93338c26504b0afa482a543a2ef5459898f" gracePeriod=2 Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.231071 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.231434 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.507014 4792 generic.go:334] "Generic (PLEG): container finished" podID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerID="276ce8e1becf6ec7783efb815a1de93338c26504b0afa482a543a2ef5459898f" exitCode=0 Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.507062 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerDied","Data":"276ce8e1becf6ec7783efb815a1de93338c26504b0afa482a543a2ef5459898f"} Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.761725 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.911786 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content\") pod \"4f03915c-294c-49c8-aeae-80f704a3ba0c\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.911903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7l8f\" (UniqueName: \"kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f\") pod \"4f03915c-294c-49c8-aeae-80f704a3ba0c\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.912006 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities\") pod \"4f03915c-294c-49c8-aeae-80f704a3ba0c\" (UID: \"4f03915c-294c-49c8-aeae-80f704a3ba0c\") " Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.914058 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities" (OuterVolumeSpecName: "utilities") pod "4f03915c-294c-49c8-aeae-80f704a3ba0c" (UID: "4f03915c-294c-49c8-aeae-80f704a3ba0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.918912 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f" (OuterVolumeSpecName: "kube-api-access-s7l8f") pod "4f03915c-294c-49c8-aeae-80f704a3ba0c" (UID: "4f03915c-294c-49c8-aeae-80f704a3ba0c"). InnerVolumeSpecName "kube-api-access-s7l8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:49:20 crc kubenswrapper[4792]: I0319 17:49:20.959551 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f03915c-294c-49c8-aeae-80f704a3ba0c" (UID: "4f03915c-294c-49c8-aeae-80f704a3ba0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.014993 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7l8f\" (UniqueName: \"kubernetes.io/projected/4f03915c-294c-49c8-aeae-80f704a3ba0c-kube-api-access-s7l8f\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.015253 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.015335 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f03915c-294c-49c8-aeae-80f704a3ba0c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.523892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99qcv" event={"ID":"4f03915c-294c-49c8-aeae-80f704a3ba0c","Type":"ContainerDied","Data":"7495c093ca174387e04c2e607789978fe17158844591985645f489c209945c2f"} Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.524397 4792 scope.go:117] "RemoveContainer" containerID="276ce8e1becf6ec7783efb815a1de93338c26504b0afa482a543a2ef5459898f" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.523928 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99qcv" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.569361 4792 scope.go:117] "RemoveContainer" containerID="f24c75195fd32d87f458556c2ebb1d7e979db61d24b2f34db3194d4c981b7682" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.578659 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.594598 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99qcv"] Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.604956 4792 scope.go:117] "RemoveContainer" containerID="a1575c46956146110546322c69101cac25e64acc89b2ed52c3cc46a2aa2d397c" Mar 19 17:49:21 crc kubenswrapper[4792]: I0319 17:49:21.761912 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" path="/var/lib/kubelet/pods/4f03915c-294c-49c8-aeae-80f704a3ba0c/volumes" Mar 19 17:49:50 crc kubenswrapper[4792]: I0319 17:49:50.230597 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:49:50 crc kubenswrapper[4792]: I0319 17:49:50.231169 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.148346 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565710-7sqw9"] Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 
17:50:00.150123 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150204 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.150276 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150330 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.150398 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150451 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.150513 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150580 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.150648 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150701 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="extract-content" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 
17:50:00.150755 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150810 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.150895 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.150948 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.151006 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.151060 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: E0319 17:50:00.151120 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.151169 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="extract-utilities" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.151420 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f03915c-294c-49c8-aeae-80f704a3ba0c" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.151485 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eceb0b35-ff75-4aa5-8cb3-24cd93c4732b" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: 
I0319 17:50:00.151539 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="008fec2e-b94d-457f-9422-e8715ff85e42" containerName="registry-server" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.152426 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.160567 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.160767 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.160864 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.162823 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565710-7sqw9"] Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.239138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m557n\" (UniqueName: \"kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n\") pod \"auto-csr-approver-29565710-7sqw9\" (UID: \"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc\") " pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.341287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m557n\" (UniqueName: \"kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n\") pod \"auto-csr-approver-29565710-7sqw9\" (UID: \"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc\") " pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.363305 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m557n\" (UniqueName: \"kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n\") pod \"auto-csr-approver-29565710-7sqw9\" (UID: \"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc\") " pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:00 crc kubenswrapper[4792]: I0319 17:50:00.480853 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:01 crc kubenswrapper[4792]: I0319 17:50:01.025696 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565710-7sqw9"] Mar 19 17:50:02 crc kubenswrapper[4792]: I0319 17:50:02.025864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" event={"ID":"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc","Type":"ContainerStarted","Data":"97bcb589f44f400d6b53ac48f4bd9c5081a81495043b1bd48bf2690af763c5bc"} Mar 19 17:50:04 crc kubenswrapper[4792]: I0319 17:50:04.457284 4792 generic.go:334] "Generic (PLEG): container finished" podID="9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" containerID="bff42a3e55ff12662156279fafb0dae3b4eccc9d861438c2209d60c4f22bae7a" exitCode=0 Mar 19 17:50:04 crc kubenswrapper[4792]: I0319 17:50:04.457810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" event={"ID":"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc","Type":"ContainerDied","Data":"bff42a3e55ff12662156279fafb0dae3b4eccc9d861438c2209d60c4f22bae7a"} Mar 19 17:50:05 crc kubenswrapper[4792]: I0319 17:50:05.900762 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:05 crc kubenswrapper[4792]: I0319 17:50:05.973550 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m557n\" (UniqueName: \"kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n\") pod \"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc\" (UID: \"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc\") " Mar 19 17:50:05 crc kubenswrapper[4792]: I0319 17:50:05.978948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n" (OuterVolumeSpecName: "kube-api-access-m557n") pod "9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" (UID: "9e6c2c75-b252-43cc-8e2c-063daf3bf3dc"). InnerVolumeSpecName "kube-api-access-m557n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.077169 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m557n\" (UniqueName: \"kubernetes.io/projected/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc-kube-api-access-m557n\") on node \"crc\" DevicePath \"\"" Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.485126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" event={"ID":"9e6c2c75-b252-43cc-8e2c-063daf3bf3dc","Type":"ContainerDied","Data":"97bcb589f44f400d6b53ac48f4bd9c5081a81495043b1bd48bf2690af763c5bc"} Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.485340 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bcb589f44f400d6b53ac48f4bd9c5081a81495043b1bd48bf2690af763c5bc" Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.485390 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565710-7sqw9" Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.971720 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565704-2hz6q"] Mar 19 17:50:06 crc kubenswrapper[4792]: I0319 17:50:06.985053 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565704-2hz6q"] Mar 19 17:50:07 crc kubenswrapper[4792]: I0319 17:50:07.796335 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a88334b-b09f-4cce-b4d0-bb07253f1223" path="/var/lib/kubelet/pods/3a88334b-b09f-4cce-b4d0-bb07253f1223/volumes" Mar 19 17:50:09 crc kubenswrapper[4792]: I0319 17:50:09.017618 4792 scope.go:117] "RemoveContainer" containerID="ff04eb66a4ee57689a03d149feef8e2e54736d429124ca514b201582b2420b24" Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.230782 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.231452 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.231506 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.232560 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.232627 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9" gracePeriod=600 Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.641805 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9" exitCode=0 Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.641911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9"} Mar 19 17:50:20 crc kubenswrapper[4792]: I0319 17:50:20.642186 4792 scope.go:117] "RemoveContainer" containerID="363329b491510346794d8cdc2a061687301fad45176170068f80fcd9731965f9" Mar 19 17:50:21 crc kubenswrapper[4792]: I0319 17:50:21.654115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77"} Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.162803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565712-gxqhr"] Mar 19 17:52:00 crc kubenswrapper[4792]: E0319 
17:52:00.164199 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" containerName="oc" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.164217 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" containerName="oc" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.164522 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" containerName="oc" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.165639 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.168106 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.169446 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.170021 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.195993 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565712-gxqhr"] Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.221124 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsv9b\" (UniqueName: \"kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b\") pod \"auto-csr-approver-29565712-gxqhr\" (UID: \"46b21e25-b24c-4432-ac3f-b67eeee56c2b\") " pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.323156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fsv9b\" (UniqueName: \"kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b\") pod \"auto-csr-approver-29565712-gxqhr\" (UID: \"46b21e25-b24c-4432-ac3f-b67eeee56c2b\") " pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.347474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsv9b\" (UniqueName: \"kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b\") pod \"auto-csr-approver-29565712-gxqhr\" (UID: \"46b21e25-b24c-4432-ac3f-b67eeee56c2b\") " pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.489046 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.970531 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565712-gxqhr"] Mar 19 17:52:00 crc kubenswrapper[4792]: W0319 17:52:00.972147 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b21e25_b24c_4432_ac3f_b67eeee56c2b.slice/crio-7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf WatchSource:0}: Error finding container 7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf: Status 404 returned error can't find the container with id 7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf Mar 19 17:52:00 crc kubenswrapper[4792]: I0319 17:52:00.979723 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:52:01 crc kubenswrapper[4792]: I0319 17:52:01.940357 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" 
event={"ID":"46b21e25-b24c-4432-ac3f-b67eeee56c2b","Type":"ContainerStarted","Data":"7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf"} Mar 19 17:52:02 crc kubenswrapper[4792]: I0319 17:52:02.964926 4792 generic.go:334] "Generic (PLEG): container finished" podID="46b21e25-b24c-4432-ac3f-b67eeee56c2b" containerID="aa54485d756ad58d1f7bb736ca2528e07fda93a0168db5f932218a9f7a320c76" exitCode=0 Mar 19 17:52:02 crc kubenswrapper[4792]: I0319 17:52:02.965231 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" event={"ID":"46b21e25-b24c-4432-ac3f-b67eeee56c2b","Type":"ContainerDied","Data":"aa54485d756ad58d1f7bb736ca2528e07fda93a0168db5f932218a9f7a320c76"} Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.422930 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.554991 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsv9b\" (UniqueName: \"kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b\") pod \"46b21e25-b24c-4432-ac3f-b67eeee56c2b\" (UID: \"46b21e25-b24c-4432-ac3f-b67eeee56c2b\") " Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.562512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b" (OuterVolumeSpecName: "kube-api-access-fsv9b") pod "46b21e25-b24c-4432-ac3f-b67eeee56c2b" (UID: "46b21e25-b24c-4432-ac3f-b67eeee56c2b"). InnerVolumeSpecName "kube-api-access-fsv9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.657106 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsv9b\" (UniqueName: \"kubernetes.io/projected/46b21e25-b24c-4432-ac3f-b67eeee56c2b-kube-api-access-fsv9b\") on node \"crc\" DevicePath \"\"" Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.987958 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" event={"ID":"46b21e25-b24c-4432-ac3f-b67eeee56c2b","Type":"ContainerDied","Data":"7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf"} Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.988010 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565712-gxqhr" Mar 19 17:52:04 crc kubenswrapper[4792]: I0319 17:52:04.988021 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab268f65854f87815c7a4b1a884dc77e5330009e8142ad8f5797a3fb58b4dbf" Mar 19 17:52:05 crc kubenswrapper[4792]: I0319 17:52:05.506903 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565706-tsqq8"] Mar 19 17:52:05 crc kubenswrapper[4792]: I0319 17:52:05.520360 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565706-tsqq8"] Mar 19 17:52:05 crc kubenswrapper[4792]: I0319 17:52:05.762132 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9d1348-e5a6-4a1b-9167-c16e45cc0202" path="/var/lib/kubelet/pods/1e9d1348-e5a6-4a1b-9167-c16e45cc0202/volumes" Mar 19 17:52:09 crc kubenswrapper[4792]: I0319 17:52:09.176123 4792 scope.go:117] "RemoveContainer" containerID="b005ca739ea08e2cd5b6ffaf4b6fae2ac246be582e3c7575e00522909b9ed406" Mar 19 17:52:20 crc kubenswrapper[4792]: I0319 17:52:20.231219 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:52:20 crc kubenswrapper[4792]: I0319 17:52:20.231894 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:52:50 crc kubenswrapper[4792]: I0319 17:52:50.231217 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:52:50 crc kubenswrapper[4792]: I0319 17:52:50.231790 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.230483 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.231063 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.231127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.232331 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.232431 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" gracePeriod=600 Mar 19 17:53:20 crc kubenswrapper[4792]: E0319 17:53:20.353140 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.833759 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" exitCode=0 Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.833855 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77"} Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.834181 4792 scope.go:117] "RemoveContainer" containerID="edd4f1b44628e421771b3f63852beda9bb33be34a68db36d09092c01b872cad9" Mar 19 17:53:20 crc kubenswrapper[4792]: I0319 17:53:20.835285 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:53:20 crc kubenswrapper[4792]: E0319 17:53:20.835629 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:53:33 crc kubenswrapper[4792]: I0319 17:53:33.741009 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:53:33 crc kubenswrapper[4792]: E0319 17:53:33.741613 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:53:47 crc kubenswrapper[4792]: I0319 17:53:47.753529 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:53:47 crc kubenswrapper[4792]: E0319 
17:53:47.754526 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.194475 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565714-jzd5w"] Mar 19 17:54:00 crc kubenswrapper[4792]: E0319 17:54:00.195522 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b21e25-b24c-4432-ac3f-b67eeee56c2b" containerName="oc" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.195535 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b21e25-b24c-4432-ac3f-b67eeee56c2b" containerName="oc" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.195829 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b21e25-b24c-4432-ac3f-b67eeee56c2b" containerName="oc" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.197259 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.199990 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.201042 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.201660 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.215271 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565714-jzd5w"] Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.294785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmmw\" (UniqueName: \"kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw\") pod \"auto-csr-approver-29565714-jzd5w\" (UID: \"b6be0703-6716-4513-927a-a61ddb46e519\") " pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.397504 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmmw\" (UniqueName: \"kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw\") pod \"auto-csr-approver-29565714-jzd5w\" (UID: \"b6be0703-6716-4513-927a-a61ddb46e519\") " pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.655217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmmw\" (UniqueName: \"kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw\") pod \"auto-csr-approver-29565714-jzd5w\" (UID: \"b6be0703-6716-4513-927a-a61ddb46e519\") " 
pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.741448 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:54:00 crc kubenswrapper[4792]: E0319 17:54:00.741823 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:54:00 crc kubenswrapper[4792]: I0319 17:54:00.817147 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:01 crc kubenswrapper[4792]: I0319 17:54:01.700422 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565714-jzd5w"] Mar 19 17:54:02 crc kubenswrapper[4792]: I0319 17:54:02.305350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" event={"ID":"b6be0703-6716-4513-927a-a61ddb46e519","Type":"ContainerStarted","Data":"2f981765bc804150ae3a0fbdc782f5becd67d6184e13efdcb3bdc78f6f8ba286"} Mar 19 17:54:04 crc kubenswrapper[4792]: I0319 17:54:04.347960 4792 generic.go:334] "Generic (PLEG): container finished" podID="b6be0703-6716-4513-927a-a61ddb46e519" containerID="4eb0576f2b997de30f144966f0a45605b87393168cc194929e64c9d17aded43f" exitCode=0 Mar 19 17:54:04 crc kubenswrapper[4792]: I0319 17:54:04.348067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" event={"ID":"b6be0703-6716-4513-927a-a61ddb46e519","Type":"ContainerDied","Data":"4eb0576f2b997de30f144966f0a45605b87393168cc194929e64c9d17aded43f"} 
Mar 19 17:54:05 crc kubenswrapper[4792]: I0319 17:54:05.888102 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:05 crc kubenswrapper[4792]: I0319 17:54:05.969871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvmmw\" (UniqueName: \"kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw\") pod \"b6be0703-6716-4513-927a-a61ddb46e519\" (UID: \"b6be0703-6716-4513-927a-a61ddb46e519\") " Mar 19 17:54:05 crc kubenswrapper[4792]: I0319 17:54:05.976019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw" (OuterVolumeSpecName: "kube-api-access-dvmmw") pod "b6be0703-6716-4513-927a-a61ddb46e519" (UID: "b6be0703-6716-4513-927a-a61ddb46e519"). InnerVolumeSpecName "kube-api-access-dvmmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.072813 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvmmw\" (UniqueName: \"kubernetes.io/projected/b6be0703-6716-4513-927a-a61ddb46e519-kube-api-access-dvmmw\") on node \"crc\" DevicePath \"\"" Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.370934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" event={"ID":"b6be0703-6716-4513-927a-a61ddb46e519","Type":"ContainerDied","Data":"2f981765bc804150ae3a0fbdc782f5becd67d6184e13efdcb3bdc78f6f8ba286"} Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.370988 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f981765bc804150ae3a0fbdc782f5becd67d6184e13efdcb3bdc78f6f8ba286" Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.371059 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565714-jzd5w" Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.968632 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565708-flg79"] Mar 19 17:54:06 crc kubenswrapper[4792]: I0319 17:54:06.980324 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565708-flg79"] Mar 19 17:54:07 crc kubenswrapper[4792]: I0319 17:54:07.760427 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7925ed-4bf9-4a06-a677-851abaec5d15" path="/var/lib/kubelet/pods/0a7925ed-4bf9-4a06-a677-851abaec5d15/volumes" Mar 19 17:54:09 crc kubenswrapper[4792]: I0319 17:54:09.339372 4792 scope.go:117] "RemoveContainer" containerID="7a58550cb1d6f7c5cdbaa66d9d4f9472118bb7558838b78603c35090dc1830db" Mar 19 17:54:15 crc kubenswrapper[4792]: I0319 17:54:15.740482 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:54:15 crc kubenswrapper[4792]: E0319 17:54:15.741212 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:54:29 crc kubenswrapper[4792]: I0319 17:54:29.740389 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:54:29 crc kubenswrapper[4792]: E0319 17:54:29.741551 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:54:42 crc kubenswrapper[4792]: I0319 17:54:42.740049 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:54:42 crc kubenswrapper[4792]: E0319 17:54:42.741359 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:54:52 crc kubenswrapper[4792]: E0319 17:54:52.638587 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:49980->38.102.83.222:39595: write tcp 38.102.83.222:49980->38.102.83.222:39595: write: broken pipe Mar 19 17:54:55 crc kubenswrapper[4792]: I0319 17:54:55.740177 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:54:55 crc kubenswrapper[4792]: E0319 17:54:55.741529 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:55:07 crc kubenswrapper[4792]: I0319 17:55:07.751961 4792 scope.go:117] "RemoveContainer" 
containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:55:07 crc kubenswrapper[4792]: E0319 17:55:07.753675 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:55:21 crc kubenswrapper[4792]: I0319 17:55:21.741680 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:55:21 crc kubenswrapper[4792]: E0319 17:55:21.742577 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:55:23 crc kubenswrapper[4792]: E0319 17:55:23.956520 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:59992->38.102.83.222:39595: write tcp 38.102.83.222:59992->38.102.83.222:39595: write: broken pipe Mar 19 17:55:32 crc kubenswrapper[4792]: I0319 17:55:32.739601 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:55:32 crc kubenswrapper[4792]: E0319 17:55:32.740423 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:55:47 crc kubenswrapper[4792]: I0319 17:55:47.776017 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:55:47 crc kubenswrapper[4792]: E0319 17:55:47.777547 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.146385 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565716-7r5gc"] Mar 19 17:56:00 crc kubenswrapper[4792]: E0319 17:56:00.147540 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6be0703-6716-4513-927a-a61ddb46e519" containerName="oc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.147554 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6be0703-6716-4513-927a-a61ddb46e519" containerName="oc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.147886 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6be0703-6716-4513-927a-a61ddb46e519" containerName="oc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.148897 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.151801 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.151871 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.151932 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.160031 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565716-7r5gc"] Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.248942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t84lr\" (UniqueName: \"kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr\") pod \"auto-csr-approver-29565716-7r5gc\" (UID: \"f166e811-2b95-4d7a-bf5e-872d7bc853fa\") " pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.352213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t84lr\" (UniqueName: \"kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr\") pod \"auto-csr-approver-29565716-7r5gc\" (UID: \"f166e811-2b95-4d7a-bf5e-872d7bc853fa\") " pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.394323 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t84lr\" (UniqueName: \"kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr\") pod \"auto-csr-approver-29565716-7r5gc\" (UID: \"f166e811-2b95-4d7a-bf5e-872d7bc853fa\") " 
pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.472354 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.741283 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:56:00 crc kubenswrapper[4792]: E0319 17:56:00.741754 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:56:00 crc kubenswrapper[4792]: I0319 17:56:00.980963 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565716-7r5gc"] Mar 19 17:56:01 crc kubenswrapper[4792]: I0319 17:56:01.204874 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" event={"ID":"f166e811-2b95-4d7a-bf5e-872d7bc853fa","Type":"ContainerStarted","Data":"4997b7deed18811f47df1aa817db6187de3194fd3661496f57ec82176de2c88b"} Mar 19 17:56:03 crc kubenswrapper[4792]: I0319 17:56:03.262383 4792 generic.go:334] "Generic (PLEG): container finished" podID="f166e811-2b95-4d7a-bf5e-872d7bc853fa" containerID="844b13116fe2e66c1f3ecea024ecd0aea7692b7391847b7f030c8931aafae78e" exitCode=0 Mar 19 17:56:03 crc kubenswrapper[4792]: I0319 17:56:03.262438 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" event={"ID":"f166e811-2b95-4d7a-bf5e-872d7bc853fa","Type":"ContainerDied","Data":"844b13116fe2e66c1f3ecea024ecd0aea7692b7391847b7f030c8931aafae78e"} 
Mar 19 17:56:04 crc kubenswrapper[4792]: I0319 17:56:04.840118 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:04 crc kubenswrapper[4792]: I0319 17:56:04.964656 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t84lr\" (UniqueName: \"kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr\") pod \"f166e811-2b95-4d7a-bf5e-872d7bc853fa\" (UID: \"f166e811-2b95-4d7a-bf5e-872d7bc853fa\") " Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.010498 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr" (OuterVolumeSpecName: "kube-api-access-t84lr") pod "f166e811-2b95-4d7a-bf5e-872d7bc853fa" (UID: "f166e811-2b95-4d7a-bf5e-872d7bc853fa"). InnerVolumeSpecName "kube-api-access-t84lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.068564 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t84lr\" (UniqueName: \"kubernetes.io/projected/f166e811-2b95-4d7a-bf5e-872d7bc853fa-kube-api-access-t84lr\") on node \"crc\" DevicePath \"\"" Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.283476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" event={"ID":"f166e811-2b95-4d7a-bf5e-872d7bc853fa","Type":"ContainerDied","Data":"4997b7deed18811f47df1aa817db6187de3194fd3661496f57ec82176de2c88b"} Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.283509 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565716-7r5gc" Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.283519 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4997b7deed18811f47df1aa817db6187de3194fd3661496f57ec82176de2c88b" Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.909993 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565710-7sqw9"] Mar 19 17:56:05 crc kubenswrapper[4792]: I0319 17:56:05.921605 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565710-7sqw9"] Mar 19 17:56:07 crc kubenswrapper[4792]: I0319 17:56:07.755153 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6c2c75-b252-43cc-8e2c-063daf3bf3dc" path="/var/lib/kubelet/pods/9e6c2c75-b252-43cc-8e2c-063daf3bf3dc/volumes" Mar 19 17:56:09 crc kubenswrapper[4792]: I0319 17:56:09.481606 4792 scope.go:117] "RemoveContainer" containerID="bff42a3e55ff12662156279fafb0dae3b4eccc9d861438c2209d60c4f22bae7a" Mar 19 17:56:13 crc kubenswrapper[4792]: I0319 17:56:13.740596 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:56:13 crc kubenswrapper[4792]: E0319 17:56:13.743197 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:56:26 crc kubenswrapper[4792]: I0319 17:56:26.739634 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:56:26 crc kubenswrapper[4792]: E0319 17:56:26.740505 4792 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:56:38 crc kubenswrapper[4792]: I0319 17:56:38.739622 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:56:38 crc kubenswrapper[4792]: E0319 17:56:38.740573 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:56:51 crc kubenswrapper[4792]: I0319 17:56:51.740817 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:56:51 crc kubenswrapper[4792]: E0319 17:56:51.742156 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:57:05 crc kubenswrapper[4792]: I0319 17:57:05.740301 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:57:05 crc kubenswrapper[4792]: E0319 
17:57:05.741288 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:57:20 crc kubenswrapper[4792]: I0319 17:57:20.740687 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:57:20 crc kubenswrapper[4792]: E0319 17:57:20.742333 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:57:35 crc kubenswrapper[4792]: I0319 17:57:35.740180 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:57:35 crc kubenswrapper[4792]: E0319 17:57:35.741487 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:57:48 crc kubenswrapper[4792]: I0319 17:57:48.741108 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:57:48 crc 
kubenswrapper[4792]: E0319 17:57:48.741796 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.170181 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565718-j7wlx"] Mar 19 17:58:00 crc kubenswrapper[4792]: E0319 17:58:00.171341 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f166e811-2b95-4d7a-bf5e-872d7bc853fa" containerName="oc" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.171357 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f166e811-2b95-4d7a-bf5e-872d7bc853fa" containerName="oc" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.171662 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f166e811-2b95-4d7a-bf5e-872d7bc853fa" containerName="oc" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.172800 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.177316 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.177397 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.177784 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.182247 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565718-j7wlx"] Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.299466 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6td5f\" (UniqueName: \"kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f\") pod \"auto-csr-approver-29565718-j7wlx\" (UID: \"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6\") " pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.402192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6td5f\" (UniqueName: \"kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f\") pod \"auto-csr-approver-29565718-j7wlx\" (UID: \"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6\") " pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.421644 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6td5f\" (UniqueName: \"kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f\") pod \"auto-csr-approver-29565718-j7wlx\" (UID: \"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6\") " 
pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:00 crc kubenswrapper[4792]: I0319 17:58:00.499221 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:01 crc kubenswrapper[4792]: I0319 17:58:01.022291 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 17:58:01 crc kubenswrapper[4792]: I0319 17:58:01.115890 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565718-j7wlx"] Mar 19 17:58:01 crc kubenswrapper[4792]: I0319 17:58:01.642615 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" event={"ID":"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6","Type":"ContainerStarted","Data":"2cf56a412e07b7c1b55c7f81cd13b4c011854af94950318bba7a1abc9108ae04"} Mar 19 17:58:02 crc kubenswrapper[4792]: I0319 17:58:02.669594 4792 generic.go:334] "Generic (PLEG): container finished" podID="c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" containerID="c821492a41e4e28213fd5c8fd0480a7958a2672bafdfa46b315b1fcd778bac3c" exitCode=0 Mar 19 17:58:02 crc kubenswrapper[4792]: I0319 17:58:02.670053 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" event={"ID":"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6","Type":"ContainerDied","Data":"c821492a41e4e28213fd5c8fd0480a7958a2672bafdfa46b315b1fcd778bac3c"} Mar 19 17:58:02 crc kubenswrapper[4792]: I0319 17:58:02.740578 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:58:02 crc kubenswrapper[4792]: E0319 17:58:02.741110 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.654244 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.694473 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" event={"ID":"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6","Type":"ContainerDied","Data":"2cf56a412e07b7c1b55c7f81cd13b4c011854af94950318bba7a1abc9108ae04"} Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.694515 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565718-j7wlx" Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.694524 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf56a412e07b7c1b55c7f81cd13b4c011854af94950318bba7a1abc9108ae04" Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.812039 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6td5f\" (UniqueName: \"kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f\") pod \"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6\" (UID: \"c1adc70a-0ee6-4cd8-a5d8-89f968d785a6\") " Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.827249 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f" (OuterVolumeSpecName: "kube-api-access-6td5f") pod "c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" (UID: "c1adc70a-0ee6-4cd8-a5d8-89f968d785a6"). InnerVolumeSpecName "kube-api-access-6td5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:58:04 crc kubenswrapper[4792]: I0319 17:58:04.914897 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6td5f\" (UniqueName: \"kubernetes.io/projected/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6-kube-api-access-6td5f\") on node \"crc\" DevicePath \"\"" Mar 19 17:58:05 crc kubenswrapper[4792]: I0319 17:58:05.723476 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565712-gxqhr"] Mar 19 17:58:05 crc kubenswrapper[4792]: I0319 17:58:05.738103 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565712-gxqhr"] Mar 19 17:58:05 crc kubenswrapper[4792]: I0319 17:58:05.757483 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b21e25-b24c-4432-ac3f-b67eeee56c2b" path="/var/lib/kubelet/pods/46b21e25-b24c-4432-ac3f-b67eeee56c2b/volumes" Mar 19 17:58:09 crc kubenswrapper[4792]: I0319 17:58:09.595556 4792 scope.go:117] "RemoveContainer" containerID="aa54485d756ad58d1f7bb736ca2528e07fda93a0168db5f932218a9f7a320c76" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.115763 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:58:15 crc kubenswrapper[4792]: E0319 17:58:15.116825 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" containerName="oc" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.116838 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" containerName="oc" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.117105 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" containerName="oc" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.122405 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.130014 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.263747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.263825 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.264004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9kx\" (UniqueName: \"kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.366001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.366068 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.366133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9kx\" (UniqueName: \"kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.366617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.366663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:15 crc kubenswrapper[4792]: I0319 17:58:15.911209 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9kx\" (UniqueName: \"kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx\") pod \"redhat-operators-mdn4s\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:16 crc kubenswrapper[4792]: I0319 17:58:16.052577 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:16 crc kubenswrapper[4792]: I0319 17:58:16.569314 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:58:16 crc kubenswrapper[4792]: I0319 17:58:16.827313 4792 generic.go:334] "Generic (PLEG): container finished" podID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerID="7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c" exitCode=0 Mar 19 17:58:16 crc kubenswrapper[4792]: I0319 17:58:16.827380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerDied","Data":"7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c"} Mar 19 17:58:16 crc kubenswrapper[4792]: I0319 17:58:16.827648 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerStarted","Data":"c33eb8605ec7581486d035248a3d1f2232ce73e0bd4057f53cc58137753e612a"} Mar 19 17:58:17 crc kubenswrapper[4792]: I0319 17:58:17.747063 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:58:17 crc kubenswrapper[4792]: E0319 17:58:17.747900 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 17:58:18 crc kubenswrapper[4792]: I0319 17:58:18.868627 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" 
event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerStarted","Data":"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d"} Mar 19 17:58:23 crc kubenswrapper[4792]: I0319 17:58:23.924538 4792 generic.go:334] "Generic (PLEG): container finished" podID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerID="2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d" exitCode=0 Mar 19 17:58:23 crc kubenswrapper[4792]: I0319 17:58:23.924632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerDied","Data":"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d"} Mar 19 17:58:25 crc kubenswrapper[4792]: I0319 17:58:25.947496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerStarted","Data":"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee"} Mar 19 17:58:25 crc kubenswrapper[4792]: I0319 17:58:25.979948 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdn4s" podStartSLOduration=2.774509267 podStartE2EDuration="10.979928023s" podCreationTimestamp="2026-03-19 17:58:15 +0000 UTC" firstStartedPulling="2026-03-19 17:58:16.829160083 +0000 UTC m=+4659.975217623" lastFinishedPulling="2026-03-19 17:58:25.034578849 +0000 UTC m=+4668.180636379" observedRunningTime="2026-03-19 17:58:25.975282856 +0000 UTC m=+4669.121340396" watchObservedRunningTime="2026-03-19 17:58:25.979928023 +0000 UTC m=+4669.125985563" Mar 19 17:58:26 crc kubenswrapper[4792]: I0319 17:58:26.053464 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:26 crc kubenswrapper[4792]: I0319 17:58:26.053949 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:58:27 crc kubenswrapper[4792]: I0319 17:58:27.457468 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdn4s" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" probeResult="failure" output=< Mar 19 17:58:27 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:58:27 crc kubenswrapper[4792]: > Mar 19 17:58:28 crc kubenswrapper[4792]: I0319 17:58:28.741019 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 17:58:30 crc kubenswrapper[4792]: I0319 17:58:30.023356 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925"} Mar 19 17:58:37 crc kubenswrapper[4792]: I0319 17:58:37.101546 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdn4s" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" probeResult="failure" output=< Mar 19 17:58:37 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:58:37 crc kubenswrapper[4792]: > Mar 19 17:58:47 crc kubenswrapper[4792]: I0319 17:58:47.100446 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdn4s" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" probeResult="failure" output=< Mar 19 17:58:47 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:58:47 crc kubenswrapper[4792]: > Mar 19 17:58:57 crc kubenswrapper[4792]: I0319 17:58:57.104421 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdn4s" 
podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" probeResult="failure" output=< Mar 19 17:58:57 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 17:58:57 crc kubenswrapper[4792]: > Mar 19 17:59:06 crc kubenswrapper[4792]: I0319 17:59:06.122294 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:59:06 crc kubenswrapper[4792]: I0319 17:59:06.180427 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:59:06 crc kubenswrapper[4792]: I0319 17:59:06.368484 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:59:07 crc kubenswrapper[4792]: I0319 17:59:07.756192 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mdn4s" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" containerID="cri-o://fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee" gracePeriod=2 Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.382791 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.405727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities\") pod \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.406013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms9kx\" (UniqueName: \"kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx\") pod \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.406085 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content\") pod \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\" (UID: \"1f14a84d-306d-4fc7-8fa2-9f8168e76f06\") " Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.408379 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities" (OuterVolumeSpecName: "utilities") pod "1f14a84d-306d-4fc7-8fa2-9f8168e76f06" (UID: "1f14a84d-306d-4fc7-8fa2-9f8168e76f06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.420127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx" (OuterVolumeSpecName: "kube-api-access-ms9kx") pod "1f14a84d-306d-4fc7-8fa2-9f8168e76f06" (UID: "1f14a84d-306d-4fc7-8fa2-9f8168e76f06"). InnerVolumeSpecName "kube-api-access-ms9kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.509680 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms9kx\" (UniqueName: \"kubernetes.io/projected/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-kube-api-access-ms9kx\") on node \"crc\" DevicePath \"\"" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.509718 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.661200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f14a84d-306d-4fc7-8fa2-9f8168e76f06" (UID: "1f14a84d-306d-4fc7-8fa2-9f8168e76f06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.713806 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f14a84d-306d-4fc7-8fa2-9f8168e76f06-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.770145 4792 generic.go:334] "Generic (PLEG): container finished" podID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerID="fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee" exitCode=0 Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.770188 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerDied","Data":"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee"} Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.770216 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mdn4s" event={"ID":"1f14a84d-306d-4fc7-8fa2-9f8168e76f06","Type":"ContainerDied","Data":"c33eb8605ec7581486d035248a3d1f2232ce73e0bd4057f53cc58137753e612a"} Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.770233 4792 scope.go:117] "RemoveContainer" containerID="fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.770239 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdn4s" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.799437 4792 scope.go:117] "RemoveContainer" containerID="2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.822894 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.837461 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mdn4s"] Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.841612 4792 scope.go:117] "RemoveContainer" containerID="7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.898789 4792 scope.go:117] "RemoveContainer" containerID="fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee" Mar 19 17:59:08 crc kubenswrapper[4792]: E0319 17:59:08.899907 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee\": container with ID starting with fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee not found: ID does not exist" containerID="fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.899957 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee"} err="failed to get container status \"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee\": rpc error: code = NotFound desc = could not find container \"fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee\": container with ID starting with fd149e20c0f12b44e05eedd5d5cbb6ee8f19542dd923cb64e94063cc4e139cee not found: ID does not exist" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.899989 4792 scope.go:117] "RemoveContainer" containerID="2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d" Mar 19 17:59:08 crc kubenswrapper[4792]: E0319 17:59:08.900310 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d\": container with ID starting with 2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d not found: ID does not exist" containerID="2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.900342 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d"} err="failed to get container status \"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d\": rpc error: code = NotFound desc = could not find container \"2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d\": container with ID starting with 2f6bf3ba9e0d28f88735f86fafaad7c21770d35531a2214a464256bccda2360d not found: ID does not exist" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.900358 4792 scope.go:117] "RemoveContainer" containerID="7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c" Mar 19 17:59:08 crc kubenswrapper[4792]: E0319 
17:59:08.900691 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c\": container with ID starting with 7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c not found: ID does not exist" containerID="7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c" Mar 19 17:59:08 crc kubenswrapper[4792]: I0319 17:59:08.900711 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c"} err="failed to get container status \"7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c\": rpc error: code = NotFound desc = could not find container \"7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c\": container with ID starting with 7307482a7b0fd8ad45b3f779f1190ee3cd80c1f64205505007302d2a3e1a191c not found: ID does not exist" Mar 19 17:59:09 crc kubenswrapper[4792]: I0319 17:59:09.753951 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" path="/var/lib/kubelet/pods/1f14a84d-306d-4fc7-8fa2-9f8168e76f06/volumes" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.460151 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 19 17:59:45 crc kubenswrapper[4792]: E0319 17:59:45.461335 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.461355 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" Mar 19 17:59:45 crc kubenswrapper[4792]: E0319 17:59:45.461383 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" 
containerName="extract-utilities" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.461391 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="extract-utilities" Mar 19 17:59:45 crc kubenswrapper[4792]: E0319 17:59:45.461425 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="extract-content" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.461434 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="extract-content" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.461768 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f14a84d-306d-4fc7-8fa2-9f8168e76f06" containerName="registry-server" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.462874 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.466386 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.466616 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.466829 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.466991 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mtvkb" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.474302 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601497 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601707 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.601744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.602038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.602149 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxks5\" (UniqueName: \"kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.602505 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.704419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.704761 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxks5\" (UniqueName: \"kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.704920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.704948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705060 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705087 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705166 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.705291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.706769 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config\") 
pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.707610 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.712127 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.712675 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.712911 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.713510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 
17:59:45.720338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxks5\" (UniqueName: \"kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.749759 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " pod="openstack/tempest-tests-tempest" Mar 19 17:59:45 crc kubenswrapper[4792]: I0319 17:59:45.788667 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 19 17:59:46 crc kubenswrapper[4792]: I0319 17:59:46.318253 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 19 17:59:47 crc kubenswrapper[4792]: I0319 17:59:47.162178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d29d0577-d9f9-4402-a79d-06557b2f2826","Type":"ContainerStarted","Data":"c11343480aa900caeae361df8b2e66bfe23bb6442e5fa3a8288620d159c55dcc"} Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.161635 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565720-27gv7"] Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.166111 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.179286 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.179379 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.179435 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.181092 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565720-27gv7"] Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.199600 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9cd\" (UniqueName: \"kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd\") pod \"auto-csr-approver-29565720-27gv7\" (UID: \"274cacda-9f26-4e2a-8f66-6159174913b4\") " pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.268831 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz"] Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.270615 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.272547 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.273017 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.281772 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz"] Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.302416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d9cd\" (UniqueName: \"kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd\") pod \"auto-csr-approver-29565720-27gv7\" (UID: \"274cacda-9f26-4e2a-8f66-6159174913b4\") " pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.322155 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d9cd\" (UniqueName: \"kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd\") pod \"auto-csr-approver-29565720-27gv7\" (UID: \"274cacda-9f26-4e2a-8f66-6159174913b4\") " pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.404248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: 
I0319 18:00:00.404562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595dh\" (UniqueName: \"kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.404761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.498176 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.509055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.509412 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.510248 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.511086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595dh\" (UniqueName: \"kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.531950 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.534788 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595dh\" (UniqueName: \"kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh\") pod \"collect-profiles-29565720-rzncz\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:00 crc kubenswrapper[4792]: I0319 18:00:00.596881 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:03 crc kubenswrapper[4792]: I0319 18:00:03.938465 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:03 crc kubenswrapper[4792]: I0319 18:00:03.943229 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.016410 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.016818 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2rc\" (UniqueName: \"kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.016980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.103040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.120931 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.120995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2rc\" (UniqueName: \"kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.121078 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.121744 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.122028 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.188142 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fp2rc\" (UniqueName: \"kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc\") pod \"community-operators-69pp5\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:04 crc kubenswrapper[4792]: I0319 18:00:04.284485 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.159018 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.163225 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.186306 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.293083 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.293238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw75g\" (UniqueName: \"kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.293280 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.396119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.396480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw75g\" (UniqueName: \"kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.396522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.396642 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.396965 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.421411 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw75g\" (UniqueName: \"kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g\") pod \"redhat-marketplace-9pbxd\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:13 crc kubenswrapper[4792]: I0319 18:00:13.493132 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:21 crc kubenswrapper[4792]: E0319 18:00:21.838786 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 19 18:00:21 crc kubenswrapper[4792]: E0319 18:00:21.842724 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d29d0577-d9f9-4402-a79d-06557b2f2826): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 18:00:21 crc kubenswrapper[4792]: E0319 18:00:21.844171 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d29d0577-d9f9-4402-a79d-06557b2f2826" Mar 19 18:00:22 crc kubenswrapper[4792]: E0319 18:00:22.550674 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d29d0577-d9f9-4402-a79d-06557b2f2826" Mar 19 18:00:22 crc 
kubenswrapper[4792]: I0319 18:00:22.618595 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565720-27gv7"] Mar 19 18:00:22 crc kubenswrapper[4792]: W0319 18:00:22.626705 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64f95ae_e111_4279_ae68_f1f62146ab86.slice/crio-ef3d3de614d4e584e221423b6f55e30072b1cb0e5489dc6e15f6db761b61c660 WatchSource:0}: Error finding container ef3d3de614d4e584e221423b6f55e30072b1cb0e5489dc6e15f6db761b61c660: Status 404 returned error can't find the container with id ef3d3de614d4e584e221423b6f55e30072b1cb0e5489dc6e15f6db761b61c660 Mar 19 18:00:22 crc kubenswrapper[4792]: I0319 18:00:22.638562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz"] Mar 19 18:00:22 crc kubenswrapper[4792]: I0319 18:00:22.653118 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:22 crc kubenswrapper[4792]: I0319 18:00:22.803584 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 18:00:22 crc kubenswrapper[4792]: W0319 18:00:22.824791 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod183c3de8_51a8_4310_aebe_a7a729a4a56d.slice/crio-63f149b961990c3daf8e9d9154192cea78865a6fedadfe587bb5d0ffe0455592 WatchSource:0}: Error finding container 63f149b961990c3daf8e9d9154192cea78865a6fedadfe587bb5d0ffe0455592: Status 404 returned error can't find the container with id 63f149b961990c3daf8e9d9154192cea78865a6fedadfe587bb5d0ffe0455592 Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.618298 4792 generic.go:334] "Generic (PLEG): container finished" podID="183c3de8-51a8-4310-aebe-a7a729a4a56d" 
containerID="fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908" exitCode=0 Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.618785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerDied","Data":"fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.618812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerStarted","Data":"63f149b961990c3daf8e9d9154192cea78865a6fedadfe587bb5d0ffe0455592"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.634440 4792 generic.go:334] "Generic (PLEG): container finished" podID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerID="a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c" exitCode=0 Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.634561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerDied","Data":"a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.634592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerStarted","Data":"ef3d3de614d4e584e221423b6f55e30072b1cb0e5489dc6e15f6db761b61c660"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.679098 4792 generic.go:334] "Generic (PLEG): container finished" podID="a807992e-e927-4fe1-9529-f840bbc96f02" containerID="c8dd7a33955fbbd882a0750fabda523cb0eccf6f3b580ce3dbc0953311998352" exitCode=0 Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.679180 4792 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" event={"ID":"a807992e-e927-4fe1-9529-f840bbc96f02","Type":"ContainerDied","Data":"c8dd7a33955fbbd882a0750fabda523cb0eccf6f3b580ce3dbc0953311998352"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.679205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" event={"ID":"a807992e-e927-4fe1-9529-f840bbc96f02","Type":"ContainerStarted","Data":"e2835d7dbc58ecae11264dbfcaa799fba00e06a9c051c03c9afdbb833842435f"} Mar 19 18:00:23 crc kubenswrapper[4792]: I0319 18:00:23.687175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565720-27gv7" event={"ID":"274cacda-9f26-4e2a-8f66-6159174913b4","Type":"ContainerStarted","Data":"9ef3baba84dd72adae4e87f2e274bc5cab210fd53c3cc21e65d76aff0c3f56b0"} Mar 19 18:00:24 crc kubenswrapper[4792]: I0319 18:00:24.702284 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerStarted","Data":"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666"} Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.093046 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.142421 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-595dh\" (UniqueName: \"kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh\") pod \"a807992e-e927-4fe1-9529-f840bbc96f02\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.142754 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume\") pod \"a807992e-e927-4fe1-9529-f840bbc96f02\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.142906 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume\") pod \"a807992e-e927-4fe1-9529-f840bbc96f02\" (UID: \"a807992e-e927-4fe1-9529-f840bbc96f02\") " Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.143978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume" (OuterVolumeSpecName: "config-volume") pod "a807992e-e927-4fe1-9529-f840bbc96f02" (UID: "a807992e-e927-4fe1-9529-f840bbc96f02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.154103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a807992e-e927-4fe1-9529-f840bbc96f02" (UID: "a807992e-e927-4fe1-9529-f840bbc96f02"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.154198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh" (OuterVolumeSpecName: "kube-api-access-595dh") pod "a807992e-e927-4fe1-9529-f840bbc96f02" (UID: "a807992e-e927-4fe1-9529-f840bbc96f02"). InnerVolumeSpecName "kube-api-access-595dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.244881 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a807992e-e927-4fe1-9529-f840bbc96f02-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.244912 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a807992e-e927-4fe1-9529-f840bbc96f02-config-volume\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.244922 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-595dh\" (UniqueName: \"kubernetes.io/projected/a807992e-e927-4fe1-9529-f840bbc96f02-kube-api-access-595dh\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.711997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565720-27gv7" event={"ID":"274cacda-9f26-4e2a-8f66-6159174913b4","Type":"ContainerStarted","Data":"16ac63c661a32b64c2efdab46ceec4ce680f23db49ec154150cf1830083f1ed1"} Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.716359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerStarted","Data":"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e"} Mar 19 18:00:25 crc 
kubenswrapper[4792]: I0319 18:00:25.718126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" event={"ID":"a807992e-e927-4fe1-9529-f840bbc96f02","Type":"ContainerDied","Data":"e2835d7dbc58ecae11264dbfcaa799fba00e06a9c051c03c9afdbb833842435f"} Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.718181 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2835d7dbc58ecae11264dbfcaa799fba00e06a9c051c03c9afdbb833842435f" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.718155 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565720-rzncz" Mar 19 18:00:25 crc kubenswrapper[4792]: I0319 18:00:25.728177 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565720-27gv7" podStartSLOduration=24.434473312 podStartE2EDuration="25.728159937s" podCreationTimestamp="2026-03-19 18:00:00 +0000 UTC" firstStartedPulling="2026-03-19 18:00:22.64190412 +0000 UTC m=+4785.787961660" lastFinishedPulling="2026-03-19 18:00:23.935590745 +0000 UTC m=+4787.081648285" observedRunningTime="2026-03-19 18:00:25.724407416 +0000 UTC m=+4788.870464956" watchObservedRunningTime="2026-03-19 18:00:25.728159937 +0000 UTC m=+4788.874217477" Mar 19 18:00:26 crc kubenswrapper[4792]: I0319 18:00:26.200924 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv"] Mar 19 18:00:26 crc kubenswrapper[4792]: I0319 18:00:26.214454 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565675-k7vjv"] Mar 19 18:00:27 crc kubenswrapper[4792]: I0319 18:00:27.744981 4792 generic.go:334] "Generic (PLEG): container finished" podID="183c3de8-51a8-4310-aebe-a7a729a4a56d" 
containerID="489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e" exitCode=0 Mar 19 18:00:27 crc kubenswrapper[4792]: I0319 18:00:27.747426 4792 generic.go:334] "Generic (PLEG): container finished" podID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerID="2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666" exitCode=0 Mar 19 18:00:27 crc kubenswrapper[4792]: I0319 18:00:27.758239 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8e0d3b-3d92-47a8-a0ca-34d66790a567" path="/var/lib/kubelet/pods/6b8e0d3b-3d92-47a8-a0ca-34d66790a567/volumes" Mar 19 18:00:27 crc kubenswrapper[4792]: I0319 18:00:27.759370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerDied","Data":"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e"} Mar 19 18:00:27 crc kubenswrapper[4792]: I0319 18:00:27.759416 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerDied","Data":"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666"} Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.760405 4792 generic.go:334] "Generic (PLEG): container finished" podID="274cacda-9f26-4e2a-8f66-6159174913b4" containerID="16ac63c661a32b64c2efdab46ceec4ce680f23db49ec154150cf1830083f1ed1" exitCode=0 Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.760523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565720-27gv7" event={"ID":"274cacda-9f26-4e2a-8f66-6159174913b4","Type":"ContainerDied","Data":"16ac63c661a32b64c2efdab46ceec4ce680f23db49ec154150cf1830083f1ed1"} Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.764879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" 
event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerStarted","Data":"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a"} Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.767493 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerStarted","Data":"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478"} Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.808152 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9pbxd" podStartSLOduration=10.974747678 podStartE2EDuration="15.808131534s" podCreationTimestamp="2026-03-19 18:00:13 +0000 UTC" firstStartedPulling="2026-03-19 18:00:23.652177998 +0000 UTC m=+4786.798235538" lastFinishedPulling="2026-03-19 18:00:28.485561854 +0000 UTC m=+4791.631619394" observedRunningTime="2026-03-19 18:00:28.796263742 +0000 UTC m=+4791.942321282" watchObservedRunningTime="2026-03-19 18:00:28.808131534 +0000 UTC m=+4791.954189074" Mar 19 18:00:28 crc kubenswrapper[4792]: I0319 18:00:28.823034 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69pp5" podStartSLOduration=21.291656404 podStartE2EDuration="25.823011159s" podCreationTimestamp="2026-03-19 18:00:03 +0000 UTC" firstStartedPulling="2026-03-19 18:00:23.678403459 +0000 UTC m=+4786.824460999" lastFinishedPulling="2026-03-19 18:00:28.209758204 +0000 UTC m=+4791.355815754" observedRunningTime="2026-03-19 18:00:28.819887803 +0000 UTC m=+4791.965945343" watchObservedRunningTime="2026-03-19 18:00:28.823011159 +0000 UTC m=+4791.969068699" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.357769 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.480541 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d9cd\" (UniqueName: \"kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd\") pod \"274cacda-9f26-4e2a-8f66-6159174913b4\" (UID: \"274cacda-9f26-4e2a-8f66-6159174913b4\") " Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.495360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd" (OuterVolumeSpecName: "kube-api-access-7d9cd") pod "274cacda-9f26-4e2a-8f66-6159174913b4" (UID: "274cacda-9f26-4e2a-8f66-6159174913b4"). InnerVolumeSpecName "kube-api-access-7d9cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.584494 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d9cd\" (UniqueName: \"kubernetes.io/projected/274cacda-9f26-4e2a-8f66-6159174913b4-kube-api-access-7d9cd\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.850846 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565720-27gv7" event={"ID":"274cacda-9f26-4e2a-8f66-6159174913b4","Type":"ContainerDied","Data":"9ef3baba84dd72adae4e87f2e274bc5cab210fd53c3cc21e65d76aff0c3f56b0"} Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.851283 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef3baba84dd72adae4e87f2e274bc5cab210fd53c3cc21e65d76aff0c3f56b0" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.851188 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565720-27gv7" Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.894419 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565714-jzd5w"] Mar 19 18:00:30 crc kubenswrapper[4792]: I0319 18:00:30.914162 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565714-jzd5w"] Mar 19 18:00:31 crc kubenswrapper[4792]: I0319 18:00:31.752836 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6be0703-6716-4513-927a-a61ddb46e519" path="/var/lib/kubelet/pods/b6be0703-6716-4513-927a-a61ddb46e519/volumes" Mar 19 18:00:33 crc kubenswrapper[4792]: I0319 18:00:33.494665 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:33 crc kubenswrapper[4792]: I0319 18:00:33.495317 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:33 crc kubenswrapper[4792]: I0319 18:00:33.576132 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:33 crc kubenswrapper[4792]: I0319 18:00:33.947323 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:34 crc kubenswrapper[4792]: I0319 18:00:34.286102 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:34 crc kubenswrapper[4792]: I0319 18:00:34.286137 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:34 crc kubenswrapper[4792]: I0319 18:00:34.818961 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 
18:00:35 crc kubenswrapper[4792]: I0319 18:00:35.338652 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-69pp5" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="registry-server" probeResult="failure" output=< Mar 19 18:00:35 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:00:35 crc kubenswrapper[4792]: > Mar 19 18:00:35 crc kubenswrapper[4792]: I0319 18:00:35.918025 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d29d0577-d9f9-4402-a79d-06557b2f2826","Type":"ContainerStarted","Data":"8e29a40dfe09cb121e3da4b2c5c6eb6653bc283d5d747da772efc7c941d61019"} Mar 19 18:00:35 crc kubenswrapper[4792]: I0319 18:00:35.918202 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9pbxd" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="registry-server" containerID="cri-o://20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a" gracePeriod=2 Mar 19 18:00:35 crc kubenswrapper[4792]: I0319 18:00:35.947565 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.610023112 podStartE2EDuration="51.947547396s" podCreationTimestamp="2026-03-19 17:59:44 +0000 UTC" firstStartedPulling="2026-03-19 17:59:46.920269072 +0000 UTC m=+4750.066326612" lastFinishedPulling="2026-03-19 18:00:34.257793356 +0000 UTC m=+4797.403850896" observedRunningTime="2026-03-19 18:00:35.936814975 +0000 UTC m=+4799.082872515" watchObservedRunningTime="2026-03-19 18:00:35.947547396 +0000 UTC m=+4799.093604936" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.463341 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.559468 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content\") pod \"183c3de8-51a8-4310-aebe-a7a729a4a56d\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.559621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw75g\" (UniqueName: \"kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g\") pod \"183c3de8-51a8-4310-aebe-a7a729a4a56d\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.559885 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities\") pod \"183c3de8-51a8-4310-aebe-a7a729a4a56d\" (UID: \"183c3de8-51a8-4310-aebe-a7a729a4a56d\") " Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.560725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities" (OuterVolumeSpecName: "utilities") pod "183c3de8-51a8-4310-aebe-a7a729a4a56d" (UID: "183c3de8-51a8-4310-aebe-a7a729a4a56d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.567625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g" (OuterVolumeSpecName: "kube-api-access-pw75g") pod "183c3de8-51a8-4310-aebe-a7a729a4a56d" (UID: "183c3de8-51a8-4310-aebe-a7a729a4a56d"). InnerVolumeSpecName "kube-api-access-pw75g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.585678 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "183c3de8-51a8-4310-aebe-a7a729a4a56d" (UID: "183c3de8-51a8-4310-aebe-a7a729a4a56d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.662719 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw75g\" (UniqueName: \"kubernetes.io/projected/183c3de8-51a8-4310-aebe-a7a729a4a56d-kube-api-access-pw75g\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.662751 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.662760 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183c3de8-51a8-4310-aebe-a7a729a4a56d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.929691 4792 generic.go:334] "Generic (PLEG): container finished" podID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerID="20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a" exitCode=0 Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.930795 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerDied","Data":"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a"} Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.930932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9pbxd" event={"ID":"183c3de8-51a8-4310-aebe-a7a729a4a56d","Type":"ContainerDied","Data":"63f149b961990c3daf8e9d9154192cea78865a6fedadfe587bb5d0ffe0455592"} Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.931014 4792 scope.go:117] "RemoveContainer" containerID="20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.931210 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pbxd" Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.984438 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.991273 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pbxd"] Mar 19 18:00:36 crc kubenswrapper[4792]: I0319 18:00:36.996753 4792 scope.go:117] "RemoveContainer" containerID="489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.033581 4792 scope.go:117] "RemoveContainer" containerID="fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.084455 4792 scope.go:117] "RemoveContainer" containerID="20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a" Mar 19 18:00:37 crc kubenswrapper[4792]: E0319 18:00:37.084937 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a\": container with ID starting with 20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a not found: ID does not exist" containerID="20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.084983 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a"} err="failed to get container status \"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a\": rpc error: code = NotFound desc = could not find container \"20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a\": container with ID starting with 20f01eeace00a62da49eed605ed2a134ea007218cd8bd032d9040fe0ea55b26a not found: ID does not exist" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.085010 4792 scope.go:117] "RemoveContainer" containerID="489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e" Mar 19 18:00:37 crc kubenswrapper[4792]: E0319 18:00:37.085394 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e\": container with ID starting with 489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e not found: ID does not exist" containerID="489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.085425 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e"} err="failed to get container status \"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e\": rpc error: code = NotFound desc = could not find container \"489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e\": container with ID starting with 489b92f710206752821ced8a0e7c6ee4b3b04ea2ef33b9129875163593c3899e not found: ID does not exist" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.085446 4792 scope.go:117] "RemoveContainer" containerID="fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908" Mar 19 18:00:37 crc kubenswrapper[4792]: E0319 
18:00:37.085812 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908\": container with ID starting with fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908 not found: ID does not exist" containerID="fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.085843 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908"} err="failed to get container status \"fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908\": rpc error: code = NotFound desc = could not find container \"fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908\": container with ID starting with fcb8b98e61f311f70938e31969990ad21298ba355b6157668f9be22ddff71908 not found: ID does not exist" Mar 19 18:00:37 crc kubenswrapper[4792]: I0319 18:00:37.762474 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" path="/var/lib/kubelet/pods/183c3de8-51a8-4310-aebe-a7a729a4a56d/volumes" Mar 19 18:00:44 crc kubenswrapper[4792]: I0319 18:00:44.338650 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:44 crc kubenswrapper[4792]: I0319 18:00:44.415459 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:44 crc kubenswrapper[4792]: I0319 18:00:44.586020 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.042035 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-69pp5" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="registry-server" containerID="cri-o://a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478" gracePeriod=2 Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.589710 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.722066 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content\") pod \"f64f95ae-e111-4279-ae68-f1f62146ab86\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.722368 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities\") pod \"f64f95ae-e111-4279-ae68-f1f62146ab86\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.722440 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp2rc\" (UniqueName: \"kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc\") pod \"f64f95ae-e111-4279-ae68-f1f62146ab86\" (UID: \"f64f95ae-e111-4279-ae68-f1f62146ab86\") " Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.724093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities" (OuterVolumeSpecName: "utilities") pod "f64f95ae-e111-4279-ae68-f1f62146ab86" (UID: "f64f95ae-e111-4279-ae68-f1f62146ab86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.728157 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc" (OuterVolumeSpecName: "kube-api-access-fp2rc") pod "f64f95ae-e111-4279-ae68-f1f62146ab86" (UID: "f64f95ae-e111-4279-ae68-f1f62146ab86"). InnerVolumeSpecName "kube-api-access-fp2rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.779295 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f64f95ae-e111-4279-ae68-f1f62146ab86" (UID: "f64f95ae-e111-4279-ae68-f1f62146ab86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.826363 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp2rc\" (UniqueName: \"kubernetes.io/projected/f64f95ae-e111-4279-ae68-f1f62146ab86-kube-api-access-fp2rc\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.826402 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:46 crc kubenswrapper[4792]: I0319 18:00:46.826414 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f95ae-e111-4279-ae68-f1f62146ab86-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.055762 4792 generic.go:334] "Generic (PLEG): container finished" podID="f64f95ae-e111-4279-ae68-f1f62146ab86" 
containerID="a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478" exitCode=0 Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.055830 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerDied","Data":"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478"} Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.055889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69pp5" event={"ID":"f64f95ae-e111-4279-ae68-f1f62146ab86","Type":"ContainerDied","Data":"ef3d3de614d4e584e221423b6f55e30072b1cb0e5489dc6e15f6db761b61c660"} Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.055911 4792 scope.go:117] "RemoveContainer" containerID="a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.055912 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69pp5" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.087867 4792 scope.go:117] "RemoveContainer" containerID="2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.093362 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.107202 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69pp5"] Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.116352 4792 scope.go:117] "RemoveContainer" containerID="a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.162656 4792 scope.go:117] "RemoveContainer" containerID="a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478" Mar 19 18:00:47 crc kubenswrapper[4792]: E0319 18:00:47.163182 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478\": container with ID starting with a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478 not found: ID does not exist" containerID="a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.163222 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478"} err="failed to get container status \"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478\": rpc error: code = NotFound desc = could not find container \"a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478\": container with ID starting with a47cd258f8d67db92d4e8cfb2c29afa433c1621a6372874e4153d9a28b612478 not 
found: ID does not exist" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.163245 4792 scope.go:117] "RemoveContainer" containerID="2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666" Mar 19 18:00:47 crc kubenswrapper[4792]: E0319 18:00:47.163959 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666\": container with ID starting with 2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666 not found: ID does not exist" containerID="2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.163982 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666"} err="failed to get container status \"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666\": rpc error: code = NotFound desc = could not find container \"2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666\": container with ID starting with 2d4b2bfaa8d72fc409dd233dfe5b8702f803c8f2858ef5e54f122c19d3338666 not found: ID does not exist" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.163996 4792 scope.go:117] "RemoveContainer" containerID="a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c" Mar 19 18:00:47 crc kubenswrapper[4792]: E0319 18:00:47.164279 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c\": container with ID starting with a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c not found: ID does not exist" containerID="a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.164318 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c"} err="failed to get container status \"a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c\": rpc error: code = NotFound desc = could not find container \"a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c\": container with ID starting with a6edf2b45113237f5ce1a6661ffc1006acdad96ea9127eae69024154ca0a9e7c not found: ID does not exist" Mar 19 18:00:47 crc kubenswrapper[4792]: I0319 18:00:47.754390 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" path="/var/lib/kubelet/pods/f64f95ae-e111-4279-ae68-f1f62146ab86/volumes" Mar 19 18:00:50 crc kubenswrapper[4792]: I0319 18:00:50.233186 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:00:50 crc kubenswrapper[4792]: I0319 18:00:50.233745 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.174700 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29565721-7r4xx"] Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.175995 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176014 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176024 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176030 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176063 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="extract-content" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176071 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="extract-content" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176099 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="extract-utilities" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176105 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="extract-utilities" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176116 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="extract-utilities" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176121 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="extract-utilities" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176132 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a807992e-e927-4fe1-9529-f840bbc96f02" containerName="collect-profiles" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176137 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a807992e-e927-4fe1-9529-f840bbc96f02" containerName="collect-profiles" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176151 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274cacda-9f26-4e2a-8f66-6159174913b4" containerName="oc" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176156 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="274cacda-9f26-4e2a-8f66-6159174913b4" containerName="oc" Mar 19 18:01:00 crc kubenswrapper[4792]: E0319 18:01:00.176169 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="extract-content" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176175 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="extract-content" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176398 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64f95ae-e111-4279-ae68-f1f62146ab86" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176415 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a807992e-e927-4fe1-9529-f840bbc96f02" containerName="collect-profiles" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176436 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="274cacda-9f26-4e2a-8f66-6159174913b4" containerName="oc" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.176445 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="183c3de8-51a8-4310-aebe-a7a729a4a56d" containerName="registry-server" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.177287 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.180777 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.180891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.181003 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.181039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcnp\" (UniqueName: \"kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.193547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565721-7r4xx"] Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.284704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.284813 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfcnp\" (UniqueName: \"kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.284938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.285170 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.293159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.293646 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.293786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.305990 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfcnp\" (UniqueName: \"kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp\") pod \"keystone-cron-29565721-7r4xx\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.515751 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:00 crc kubenswrapper[4792]: I0319 18:01:00.849177 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565721-7r4xx"] Mar 19 18:01:01 crc kubenswrapper[4792]: I0319 18:01:01.231105 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565721-7r4xx" event={"ID":"3d3d5772-f179-4d7b-bbdd-5e6d7a276777","Type":"ContainerStarted","Data":"2ee47d36c4da5a8f7cf53203550f7c1bf2e46480f0b2b20884a9bf96b22cf981"} Mar 19 18:01:01 crc kubenswrapper[4792]: I0319 18:01:01.231158 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565721-7r4xx" event={"ID":"3d3d5772-f179-4d7b-bbdd-5e6d7a276777","Type":"ContainerStarted","Data":"a58c1b20ab1773621fa33f9436a5b77e8827fd4a2124bcf25d353f2f1e2e4734"} Mar 19 18:01:01 crc kubenswrapper[4792]: I0319 18:01:01.266783 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29565721-7r4xx" podStartSLOduration=1.266757812 podStartE2EDuration="1.266757812s" podCreationTimestamp="2026-03-19 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 18:01:01.253063391 +0000 UTC m=+4824.399120941" watchObservedRunningTime="2026-03-19 18:01:01.266757812 +0000 UTC m=+4824.412815352" Mar 19 18:01:05 crc kubenswrapper[4792]: I0319 18:01:05.284183 4792 generic.go:334] "Generic (PLEG): container finished" podID="3d3d5772-f179-4d7b-bbdd-5e6d7a276777" containerID="2ee47d36c4da5a8f7cf53203550f7c1bf2e46480f0b2b20884a9bf96b22cf981" exitCode=0 Mar 19 18:01:05 crc kubenswrapper[4792]: I0319 18:01:05.284257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565721-7r4xx" 
event={"ID":"3d3d5772-f179-4d7b-bbdd-5e6d7a276777","Type":"ContainerDied","Data":"2ee47d36c4da5a8f7cf53203550f7c1bf2e46480f0b2b20884a9bf96b22cf981"} Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.801519 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.916916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfcnp\" (UniqueName: \"kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp\") pod \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.917390 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle\") pod \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.917536 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys\") pod \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.918281 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data\") pod \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\" (UID: \"3d3d5772-f179-4d7b-bbdd-5e6d7a276777\") " Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.927944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "3d3d5772-f179-4d7b-bbdd-5e6d7a276777" (UID: "3d3d5772-f179-4d7b-bbdd-5e6d7a276777"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.932052 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp" (OuterVolumeSpecName: "kube-api-access-kfcnp") pod "3d3d5772-f179-4d7b-bbdd-5e6d7a276777" (UID: "3d3d5772-f179-4d7b-bbdd-5e6d7a276777"). InnerVolumeSpecName "kube-api-access-kfcnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.970959 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3d5772-f179-4d7b-bbdd-5e6d7a276777" (UID: "3d3d5772-f179-4d7b-bbdd-5e6d7a276777"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:01:06 crc kubenswrapper[4792]: I0319 18:01:06.990548 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data" (OuterVolumeSpecName: "config-data") pod "3d3d5772-f179-4d7b-bbdd-5e6d7a276777" (UID: "3d3d5772-f179-4d7b-bbdd-5e6d7a276777"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.021343 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfcnp\" (UniqueName: \"kubernetes.io/projected/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-kube-api-access-kfcnp\") on node \"crc\" DevicePath \"\"" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.021384 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.021393 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.021401 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d5772-f179-4d7b-bbdd-5e6d7a276777-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.305110 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565721-7r4xx" event={"ID":"3d3d5772-f179-4d7b-bbdd-5e6d7a276777","Type":"ContainerDied","Data":"a58c1b20ab1773621fa33f9436a5b77e8827fd4a2124bcf25d353f2f1e2e4734"} Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.305476 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58c1b20ab1773621fa33f9436a5b77e8827fd4a2124bcf25d353f2f1e2e4734" Mar 19 18:01:07 crc kubenswrapper[4792]: I0319 18:01:07.305165 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29565721-7r4xx" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.318860 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:01:08 crc kubenswrapper[4792]: E0319 18:01:08.319653 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3d5772-f179-4d7b-bbdd-5e6d7a276777" containerName="keystone-cron" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.319666 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3d5772-f179-4d7b-bbdd-5e6d7a276777" containerName="keystone-cron" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.319901 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3d5772-f179-4d7b-bbdd-5e6d7a276777" containerName="keystone-cron" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.323125 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.336096 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.471758 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.471936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " 
pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.471998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4sz\" (UniqueName: \"kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.573942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.574433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.574527 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4sz\" (UniqueName: \"kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.575375 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " 
pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.575678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.597944 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4sz\" (UniqueName: \"kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz\") pod \"certified-operators-8dv94\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:08 crc kubenswrapper[4792]: I0319 18:01:08.699397 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:09 crc kubenswrapper[4792]: I0319 18:01:09.274258 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:01:09 crc kubenswrapper[4792]: W0319 18:01:09.283012 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb550a284_5a60_4772_a518_0beec88de1ba.slice/crio-3a83aa4be91326f7f35a656e9264bdd990956107282c91f512450b3a6b6ec435 WatchSource:0}: Error finding container 3a83aa4be91326f7f35a656e9264bdd990956107282c91f512450b3a6b6ec435: Status 404 returned error can't find the container with id 3a83aa4be91326f7f35a656e9264bdd990956107282c91f512450b3a6b6ec435 Mar 19 18:01:09 crc kubenswrapper[4792]: I0319 18:01:09.330247 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" 
event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerStarted","Data":"3a83aa4be91326f7f35a656e9264bdd990956107282c91f512450b3a6b6ec435"} Mar 19 18:01:09 crc kubenswrapper[4792]: I0319 18:01:09.897723 4792 scope.go:117] "RemoveContainer" containerID="4eb0576f2b997de30f144966f0a45605b87393168cc194929e64c9d17aded43f" Mar 19 18:01:09 crc kubenswrapper[4792]: I0319 18:01:09.961728 4792 scope.go:117] "RemoveContainer" containerID="84e91a5ec03452475a4e1b84540581ce33541d9b0d7c56c8dab7edce3b2daa6e" Mar 19 18:01:10 crc kubenswrapper[4792]: I0319 18:01:10.341957 4792 generic.go:334] "Generic (PLEG): container finished" podID="b550a284-5a60-4772-a518-0beec88de1ba" containerID="022e62e761cb4e0181760b816006deab7b48b82c5ddc92d09f49b34b99fefbe6" exitCode=0 Mar 19 18:01:10 crc kubenswrapper[4792]: I0319 18:01:10.342005 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerDied","Data":"022e62e761cb4e0181760b816006deab7b48b82c5ddc92d09f49b34b99fefbe6"} Mar 19 18:01:12 crc kubenswrapper[4792]: I0319 18:01:12.363533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerStarted","Data":"4bc3d940fb7ed7afb508e46876dbde0bdfc44bdaa8dd7f7c7c843dc52842e694"} Mar 19 18:01:15 crc kubenswrapper[4792]: I0319 18:01:15.390257 4792 generic.go:334] "Generic (PLEG): container finished" podID="b550a284-5a60-4772-a518-0beec88de1ba" containerID="4bc3d940fb7ed7afb508e46876dbde0bdfc44bdaa8dd7f7c7c843dc52842e694" exitCode=0 Mar 19 18:01:15 crc kubenswrapper[4792]: I0319 18:01:15.390540 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerDied","Data":"4bc3d940fb7ed7afb508e46876dbde0bdfc44bdaa8dd7f7c7c843dc52842e694"} Mar 19 
18:01:16 crc kubenswrapper[4792]: I0319 18:01:16.403113 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerStarted","Data":"b220d650719d4981fd5a76fbf757ba5a1921cc99fc81b8fb12a8391c07b39a74"} Mar 19 18:01:16 crc kubenswrapper[4792]: I0319 18:01:16.435326 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8dv94" podStartSLOduration=2.687644957 podStartE2EDuration="8.435287193s" podCreationTimestamp="2026-03-19 18:01:08 +0000 UTC" firstStartedPulling="2026-03-19 18:01:10.344699073 +0000 UTC m=+4833.490756613" lastFinishedPulling="2026-03-19 18:01:16.092341309 +0000 UTC m=+4839.238398849" observedRunningTime="2026-03-19 18:01:16.424042568 +0000 UTC m=+4839.570100108" watchObservedRunningTime="2026-03-19 18:01:16.435287193 +0000 UTC m=+4839.581344743" Mar 19 18:01:18 crc kubenswrapper[4792]: I0319 18:01:18.700884 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:18 crc kubenswrapper[4792]: I0319 18:01:18.701226 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:01:19 crc kubenswrapper[4792]: I0319 18:01:19.768330 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:19 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:19 crc kubenswrapper[4792]: > Mar 19 18:01:20 crc kubenswrapper[4792]: I0319 18:01:20.230948 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:01:20 crc kubenswrapper[4792]: I0319 18:01:20.231580 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:01:29 crc kubenswrapper[4792]: I0319 18:01:29.817650 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:29 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:29 crc kubenswrapper[4792]: > Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.245943 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podUID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.323029 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.323133 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" 
podUID="29107ce9-41d6-410b-b256-723555fd6169" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.680154 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.682646 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.722033 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:33 crc kubenswrapper[4792]: I0319 18:01:33.722106 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.110051 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.584301 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.584691 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.789086 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.811887 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out" Mar 19 18:01:34 crc kubenswrapper[4792]: I0319 18:01:34.811890 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out" Mar 19 18:01:35 crc kubenswrapper[4792]: I0319 18:01:35.030260 4792 patch_prober.go:28] 
interesting pod/nmstate-webhook-5f558f5558-sjth6 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:35 crc kubenswrapper[4792]: I0319 18:01:35.030328 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podUID="9d86fdf3-73d9-48f7-b44f-6182252fc4f8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:35 crc kubenswrapper[4792]: I0319 18:01:35.810997 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out" Mar 19 18:01:35 crc kubenswrapper[4792]: I0319 18:01:35.811337 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out" Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.010344 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.010413 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.242020 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.242020 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.242092 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.242081 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.433787 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.433890 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.518030 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" start-of-body= Mar 19 18:01:36 crc kubenswrapper[4792]: I0319 18:01:36.518127 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.434429 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.434803 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.434954 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.435026 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.570107 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.570124 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.794516 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure 
output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:37 crc kubenswrapper[4792]: I0319 18:01:37.795014 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.127386 4792 trace.go:236] Trace[550607966]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (19-Mar-2026 18:01:36.828) (total time: 1282ms): Mar 19 18:01:38 crc kubenswrapper[4792]: Trace[550607966]: [1.282676341s] [1.282676341s] END Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.280218 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.280279 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.280228 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager 
namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.280448 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.286586 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.286623 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:38 crc kubenswrapper[4792]: I0319 18:01:38.286639 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:38 
crc kubenswrapper[4792]: I0319 18:01:38.286694 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.030470 4792 patch_prober.go:28] interesting pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.030823 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.495026 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vswr4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.495026 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vswr4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.495107 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podUID="a9918a46-a0e8-400e-bd0c-0af4b0d05339" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.495107 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podUID="a9918a46-a0e8-400e-bd0c-0af4b0d05339" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:39 crc kubenswrapper[4792]: I0319 18:01:39.824792 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mmsmp" podUID="ae053ba9-b3d6-427d-b0e4-88e11ef2ba71" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 19 18:01:40 crc kubenswrapper[4792]: I0319 18:01:40.376012 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:40 crc kubenswrapper[4792]: I0319 18:01:40.376070 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:40 crc 
kubenswrapper[4792]: I0319 18:01:40.376153 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:40 crc kubenswrapper[4792]: I0319 18:01:40.376214 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:40 crc kubenswrapper[4792]: I0319 18:01:40.810585 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:01:40 crc kubenswrapper[4792]: I0319 18:01:40.812970 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.315103 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" podUID="29961080-94d4-4275-8d1a-baf1405cf2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.362019 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" 
podUID="335bce01-df52-41ca-b47a-daa5e8ac917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.433929 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.434296 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.459459 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.519784 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.519867 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" 
containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.561038 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.648017 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podUID="d14a657c-5e70-4847-9b07-f85ce53d7757" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.774093 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.774170 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.813730 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.856108 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.856171 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.856191 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.898060 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:41 crc kubenswrapper[4792]: I0319 18:01:41.898118 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.138272 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podUID="91a44cfc-5acd-4b7c-814c-1521b5e2b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.138264 4792 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-cfgxg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.138373 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" podUID="6430b947-6329-4e68-9cb4-6e08ee058f70" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.180038 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podUID="23c3a809-9d7c-4d60-be1f-2fbc1583e5d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.180089 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.180362 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.180117 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.180396 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.183145 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.183185 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.183547 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.183573 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.195906 4792 patch_prober.go:28] interesting pod/metrics-server-856df7d6cf-zntpc container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.195958 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" podUID="70963d0d-d9ae-4a3c-a2c7-8e05a90cd337" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 
18:01:42.196144 4792 patch_prober.go:28] interesting pod/metrics-server-856df7d6cf-zntpc container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.196164 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" podUID="70963d0d-d9ae-4a3c-a2c7-8e05a90cd337" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.271187 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.312188 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" podUID="2f5d3346-4746-45e3-a73e-3d94d586e34d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.320662 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:42 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 
18:01:42 crc kubenswrapper[4792]: >
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.321969 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:42 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:42 crc kubenswrapper[4792]: >
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.322058 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:42 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:42 crc kubenswrapper[4792]: >
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.324614 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:42 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:42 crc kubenswrapper[4792]: >
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.325497 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:42 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:42 crc kubenswrapper[4792]: >
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.429101 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.429169 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.429511 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.429572 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.453706 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.453674 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.453779 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.453832 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.599080 4792 patch_prober.go:28] interesting pod/loki-operator-controller-manager-795c7b44df-ssttv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.599455 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podUID="1d900a68-83bb-40f6-8841-556f80c6ac78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.690292 4792 patch_prober.go:28] interesting pod/monitoring-plugin-5748767799-dwqlm container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.690371 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" podUID="33bb9632-c429-4194-91fe-698d60a4933a" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.785277 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.785285 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.785335 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:42 crc kubenswrapper[4792]: I0319 18:01:42.785378 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.219101 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podUID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.400561 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.400685 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" podUID="29107ce9-41d6-410b-b256-723555fd6169" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.400725 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.401067 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" podUID="29107ce9-41d6-410b-b256-723555fd6169" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.538081 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.538087 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.720005 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.720075 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.720077 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:43 crc kubenswrapper[4792]: I0319 18:01:43.720129 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.027153 4792 patch_prober.go:28] interesting pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.027974 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.057101 4792 patch_prober.go:28] interesting pod/perses-operator-5b64d67795-hhzt7 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.057158 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.152031 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.152031 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.273056 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.273204 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.582240 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.582329 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.709892 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:44 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s
Mar 19 18:01:44 crc kubenswrapper[4792]: >
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.725984 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:44 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s
Mar 19 18:01:44 crc kubenswrapper[4792]: >
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.726094 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:44 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s
Mar 19 18:01:44 crc kubenswrapper[4792]: >
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.726131 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:44 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s
Mar 19 18:01:44 crc kubenswrapper[4792]: >
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.774015 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.774126 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.809910 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.811673 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mmsmp" podUID="ae053ba9-b3d6-427d-b0e4-88e11ef2ba71" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.811680 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.897169 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.897225 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.897263 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.987561 4792 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-sjth6 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:44 crc kubenswrapper[4792]: I0319 18:01:44.987647 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podUID="9d86fdf3-73d9-48f7-b44f-6182252fc4f8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.810789 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.810989 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.813255 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.813495 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.872013 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:45 crc kubenswrapper[4792]: I0319 18:01:45.872059 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.010476 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.010567 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.240626 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.240995 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.241528 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.241689 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.321694 4792 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-z95d6 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.322095 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" podUID="03d0f2d0-18de-48b9-ba57-85e09753dccf" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.438490 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.439002 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.438540 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.439117 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.473673 4792 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-ljg58 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.473746 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" podUID="78b39436-d594-47d8-9e75-8470495398ac" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.518612 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.518687 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.518753 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.518697 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.626909 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:46 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:46 crc kubenswrapper[4792]: >
Mar 19 18:01:46 crc kubenswrapper[4792]: I0319 18:01:46.628589 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output=<
Mar 19 18:01:46 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Mar 19 18:01:46 crc kubenswrapper[4792]: >
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.009900 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.009973 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.209089 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.209165 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.322402 4792 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-z95d6 container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.322478 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" podUID="03d0f2d0-18de-48b9-ba57-85e09753dccf" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.434201 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.434239 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.434258 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.434275 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.439571 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.439635 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.439714 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.439740 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.508789 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={
Mar 19 18:01:47 crc kubenswrapper[4792]: "http": "Get \"https://localhost:8080\": context deadline exceeded"
Mar 19 18:01:47 crc kubenswrapper[4792]: }
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.508866 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.559091 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.559121 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.559186 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.559207 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.795398 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.795468 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:47 crc kubenswrapper[4792]: I0319 18:01:47.815003 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.281104 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting
headers)" start-of-body= Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.281451 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.281171 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.281561 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.286315 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.286371 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" 
podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.286314 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:48 crc kubenswrapper[4792]: I0319 18:01:48.286445 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.024546 4792 patch_prober.go:28] interesting pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.024944 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.024647 4792 patch_prober.go:28] interesting 
pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.025018 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.293707 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="94c78995-4f1f-4eca-a3fb-df83caafa647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.183:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.293753 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="94c78995-4f1f-4eca-a3fb-df83caafa647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.183:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.736850 4792 patch_prober.go:28] interesting pod/console-8656c6c5d8-kzwmx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:49 crc kubenswrapper[4792]: I0319 18:01:49.737545 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/console-8656c6c5d8-kzwmx" podUID="9100a499-798c-4e58-815d-030f63f25740" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.231595 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.231666 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.233448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.237274 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.237383 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" 
containerID="cri-o://72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925" gracePeriod=600 Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.819624 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.819789 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:01:50 crc kubenswrapper[4792]: I0319 18:01:50.820380 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.308305 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" podUID="c82a8813-bf57-4e7c-88fb-34b0ebee51be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.391124 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" podUID="c82a8813-bf57-4e7c-88fb-34b0ebee51be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.391087 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" podUID="29961080-94d4-4275-8d1a-baf1405cf2bb" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.433601 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded" start-of-body= Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.433660 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.433739 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded" start-of-body= Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.433835 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.473035 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" podUID="29961080-94d4-4275-8d1a-baf1405cf2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.473064 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" podUID="335bce01-df52-41ca-b47a-daa5e8ac917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.518528 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.518616 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.518526 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.518748 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.556042 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.556160 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" podUID="335bce01-df52-41ca-b47a-daa5e8ac917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.556425 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.618536 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.702038 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: 
I0319 18:01:51.702050 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podUID="80afdbc0-ff4c-4806-884d-ef3542b4de9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.785093 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podUID="d14a657c-5e70-4847-9b07-f85ce53d7757" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.836252 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925" exitCode=0 Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.836315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925"} Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.839942 4792 scope.go:117] "RemoveContainer" containerID="ad4a51920ae6b17ca4f6cef1c6dbb748a76fad6bb1c5c877dd79a7620d68aa77" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.867036 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc 
kubenswrapper[4792]: I0319 18:01:51.945382 4792 trace.go:236] Trace[869571932]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (19-Mar-2026 18:01:46.732) (total time: 5207ms): Mar 19 18:01:51 crc kubenswrapper[4792]: Trace[869571932]: [5.207865704s] [5.207865704s] END Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.949470 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:51 crc kubenswrapper[4792]: I0319 18:01:51.949482 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podUID="80afdbc0-ff4c-4806-884d-ef3542b4de9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.033173 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podUID="33f808bd-605c-41c7-94fb-92ceab7de0a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.116209 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.116291 4792 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.116585 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podUID="d14a657c-5e70-4847-9b07-f85ce53d7757" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.118816 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.119389 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.161954 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podUID="6832677c-467f-4786-b2f8-9c999c94f3ba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc 
kubenswrapper[4792]: I0319 18:01:52.203117 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podUID="6832677c-467f-4786-b2f8-9c999c94f3ba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.203167 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.203190 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.203320 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podUID="33f808bd-605c-41c7-94fb-92ceab7de0a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.203353 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 
18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.203367 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.285016 4792 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-cfgxg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.285364 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" podUID="6430b947-6329-4e68-9cb4-6e08ee058f70" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.285024 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podUID="91a44cfc-5acd-4b7c-814c-1521b5e2b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368012 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368041 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podUID="23c3a809-9d7c-4d60-be1f-2fbc1583e5d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368093 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368148 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368162 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368444 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368464 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368494 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.368506 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.410251 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc 
kubenswrapper[4792]: I0319 18:01:52.451058 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.451058 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podUID="91a44cfc-5acd-4b7c-814c-1521b5e2b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.492064 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" podUID="2f5d3346-4746-45e3-a73e-3d94d586e34d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533046 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533064 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" podUID="2f5d3346-4746-45e3-a73e-3d94d586e34d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533099 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533119 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533185 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533264 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podUID="23c3a809-9d7c-4d60-be1f-2fbc1583e5d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533886 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533917 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533973 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.533997 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.534519 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.534541 4792 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.534746 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Readiness probe status=failure output="" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.534769 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.534807 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.535117 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Liveness probe status=failure output="" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.599044 4792 patch_prober.go:28] interesting pod/loki-operator-controller-manager-795c7b44df-ssttv container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get 
\"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.599102 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podUID="1d900a68-83bb-40f6-8841-556f80c6ac78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.640056 4792 patch_prober.go:28] interesting pod/loki-operator-controller-manager-795c7b44df-ssttv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.640116 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podUID="1d900a68-83bb-40f6-8841-556f80c6ac78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.689577 4792 patch_prober.go:28] interesting pod/monitoring-plugin-5748767799-dwqlm container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.689648 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" podUID="33bb9632-c429-4194-91fe-698d60a4933a" 
containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.821933 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.825042 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.825103 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.825177 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.825103 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:52 crc kubenswrapper[4792]: I0319 18:01:52.846966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db"} Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.221164 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podUID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.222009 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.317066 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.317107 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" podUID="29107ce9-41d6-410b-b256-723555fd6169" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.317187 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.317224 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.536043 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.536060 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.723126 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.723183 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.723281 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.725120 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.726208 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.726295 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.727053 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"fd188ec05c07ed602a7a49a17e83601a9d8d17b36b4bc5f3638428c58d0da6ae"} pod="openshift-operators/observability-operator-6dd7dd855f-625pf" containerMessage="Container operator failed liveness probe, will be restarted" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.727089 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" containerID="cri-o://fd188ec05c07ed602a7a49a17e83601a9d8d17b36b4bc5f3638428c58d0da6ae" gracePeriod=30 Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.831819 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.831912 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.831819 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output="command timed out" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.834416 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"5af0d393bca190b78fa50c881d4fbcfbcad66edf3876c95ba3eafa8a09d61bc3"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 19 18:01:53 crc kubenswrapper[4792]: I0319 18:01:53.834536 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerName="ceilometer-central-agent" containerID="cri-o://5af0d393bca190b78fa50c881d4fbcfbcad66edf3876c95ba3eafa8a09d61bc3" gracePeriod=30 Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098046 4792 patch_prober.go:28] interesting pod/perses-operator-5b64d67795-hhzt7 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098109 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" containerName="perses-operator" 
probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098182 4792 patch_prober.go:28] interesting pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098198 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098354 4792 patch_prober.go:28] interesting pod/perses-operator-5b64d67795-hhzt7 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.098443 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.140250 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.140367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.188966 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.273064 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podUID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.273136 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.358068 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.399125 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" podUID="29107ce9-41d6-410b-b256-723555fd6169" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.525172 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.525172 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.525450 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.526245 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" 
output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.526452 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.526460 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.528536 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.533681 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:01:54 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:01:54 crc kubenswrapper[4792]: > Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.583146 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler 
namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.583214 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.583271 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.585736 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.585860 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8" gracePeriod=30 Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.733514 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.811194 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.811316 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.811816 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.811896 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.826398 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938021 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938021 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938153 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938172 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938363 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938260 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.938419 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.940047 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"02eb8fb242dc30e4be78084af24230af6458e148a8624d1b640478ba6aa93114"} pod="metallb-system/frr-k8s-nd7zd" containerMessage="Container frr failed liveness probe, will be restarted"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.940138 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" containerID="cri-o://02eb8fb242dc30e4be78084af24230af6458e148a8624d1b640478ba6aa93114" gracePeriod=2
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.987620 4792 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-sjth6 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.987680 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podUID="9d86fdf3-73d9-48f7-b44f-6182252fc4f8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:54 crc kubenswrapper[4792]: I0319 18:01:54.987761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.182105 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.292064 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.292109 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.292193 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.292134 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.810279 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.810976 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.811042 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.811065 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.811140 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.811523 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mmsmp" podUID="ae053ba9-b3d6-427d-b0e4-88e11ef2ba71" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.821924 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"f256c98fb2d8568bfe54d6a492050c3f0b90acc15b08924c19543f88117420f9"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.873216 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.873233 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.988698 4792 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-sjth6 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:55 crc kubenswrapper[4792]: I0319 18:01:55.988780 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podUID="9d86fdf3-73d9-48f7-b44f-6182252fc4f8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.010126 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.010188 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.010281 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241375 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241730 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241422 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241825 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241864 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.241932 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.243207 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"57686d57dced50c52b9d5d3436c0604f76a4afec7189e37e0899020bfe2dc486"} pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.322274 4792 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-z95d6 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.322330 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" podUID="03d0f2d0-18de-48b9-ba57-85e09753dccf" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.433494 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.433771 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.433818 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.433996 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.474012 4792 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-ljg58 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.474079 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" podUID="78b39436-d594-47d8-9e75-8470495398ac" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.518394 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.518472 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.518800 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.519058 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.794916 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body=
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.795351 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.795544 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.811449 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.815135 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.815212 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.901821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerDied","Data":"02eb8fb242dc30e4be78084af24230af6458e148a8624d1b640478ba6aa93114"}
Mar 19 18:01:56 crc kubenswrapper[4792]: I0319 18:01:56.903605 4792 generic.go:334] "Generic (PLEG): container finished" podID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerID="02eb8fb242dc30e4be78084af24230af6458e148a8624d1b640478ba6aa93114" exitCode=143
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.011050 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.011119 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.207728 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.207784 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b90cdc46-8fb4-424e-be18-e675309acdff" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.243710 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.244067 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.322825 4792 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.322913 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="d42fa7f9-ea92-480c-8de6-cf0b6b9219e6" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434143 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434197 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434238 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434257 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434335 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.434419 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.436274 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"c488690842c3cea3e79b5fd60ec46f8e5bfc0a752f44c113987a29df8199fe59"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.436319 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" containerID="cri-o://c488690842c3cea3e79b5fd60ec46f8e5bfc0a752f44c113987a29df8199fe59" gracePeriod=30
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.437524 4792 trace.go:236] Trace[603069102]: "Calculate volume metrics of storage for pod minio-dev/minio" (19-Mar-2026 18:01:53.641) (total time: 3791ms):
Mar 19 18:01:57 crc kubenswrapper[4792]: Trace[603069102]: [3.791012615s] [3.791012615s] END
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.437525 4792 trace.go:236] Trace[1922418114]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (19-Mar-2026 18:01:56.165) (total time: 1267ms):
Mar 19 18:01:57 crc kubenswrapper[4792]: Trace[1922418114]: [1.267388504s] [1.267388504s] END
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.513197 4792 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.513266 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="312a9ea1-8c2b-4b68-a4c2-55869981692e" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.557014 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.557092 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.557426 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.558622 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"716f0a0c682956b86df34501b8ac23fec8aac85d02ebee1be5f9ac81bdbae970"} pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted"
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.558656 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" containerID="cri-o://716f0a0c682956b86df34501b8ac23fec8aac85d02ebee1be5f9ac81bdbae970" gracePeriod=30
Mar 19 18:01:57 crc kubenswrapper[4792]: I0319 18:01:57.929173 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"70d09c2c2e38f4e6b0a2bc733b282a21946c61e8e588683901e561677e4be351"}
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.281278 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.281313 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.281492 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.281438 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.281542 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.282953 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"46a9f60d6a4266af70cd825c3c38a2ff12759f05154eebe0e4d2afc81f0ead8c"} pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.282990 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" containerID="cri-o://46a9f60d6a4266af70cd825c3c38a2ff12759f05154eebe0e4d2afc81f0ead8c" gracePeriod=30
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286005 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286033 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286058 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286089 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286122 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286937 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"7416e76ed8d24a2ee7ad89edf653f9bc75af88b082e61ccc27b8a92bf90841a6"} pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.286969 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" containerID="cri-o://7416e76ed8d24a2ee7ad89edf653f9bc75af88b082e61ccc27b8a92bf90841a6" gracePeriod=30
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.337981 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.338038 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.338082 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.339273 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.339401 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.339450 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"bbf99bcf3f1a102ffda62028210cde474da248eaba75dd048f3b8d64a3411cd2"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.339607 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" containerID="cri-o://bbf99bcf3f1a102ffda62028210cde474da248eaba75dd048f3b8d64a3411cd2" gracePeriod=30
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.339632 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.435299 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.435376 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.443274 4792 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-prfmr container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.71:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.443314 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr"
podUID="7c6f611e-37c6-424d-9c46-32a92c5ac3b7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.71:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.443333 4792 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-prfmr container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.71:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.443368 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-prfmr" podUID="7c6f611e-37c6-424d-9c46-32a92c5ac3b7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.71:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:58 crc kubenswrapper[4792]: I0319 18:01:58.747943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nd7zd" Mar 19 18:01:58 crc kubenswrapper[4792]: E0319 18:01:58.804152 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T18:01:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T18:01:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T18:01:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T18:01:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.024800 4792 patch_prober.go:28] interesting pod/thanos-querier-87649d4fc-vf7hh container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.024888 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-87649d4fc-vf7hh" podUID="e647560e-f7fe-4bb2-bf05-80a88cf1c66a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.294071 4792 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="94c78995-4f1f-4eca-a3fb-df83caafa647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.183:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.294144 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="94c78995-4f1f-4eca-a3fb-df83caafa647" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.183:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.451564 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vswr4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.451651 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podUID="a9918a46-a0e8-400e-bd0c-0af4b0d05339" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.451572 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vswr4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.451768 4792 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vswr4" podUID="a9918a46-a0e8-400e-bd0c-0af4b0d05339" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.735345 4792 patch_prober.go:28] interesting pod/console-8656c6c5d8-kzwmx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.735409 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-8656c6c5d8-kzwmx" podUID="9100a499-798c-4e58-815d-030f63f25740" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.788028 4792 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:01:59 crc kubenswrapper[4792]: I0319 18:01:59.810944 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mmsmp" podUID="ae053ba9-b3d6-427d-b0e4-88e11ef2ba71" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.319569 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="3c37ff21-a32e-4b93-9292-3648b8cc3a8e" 
containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.25:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.319652 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="3c37ff21-a32e-4b93-9292-3648b8cc3a8e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.25:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.828623 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.831327 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out" Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.960325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" event={"ID":"bf8a2335-56a0-4c34-ac01-e93578bf4cbd","Type":"ContainerDied","Data":"716f0a0c682956b86df34501b8ac23fec8aac85d02ebee1be5f9ac81bdbae970"} Mar 19 18:02:00 crc kubenswrapper[4792]: I0319 18:02:00.960758 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerID="716f0a0c682956b86df34501b8ac23fec8aac85d02ebee1be5f9ac81bdbae970" exitCode=0 Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.193010 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-2487f" 
podUID="9bb5702e-9617-4fb3-a13b-32aa8f7209bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.237012 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-rd29l" podUID="a1ed7ec7-1763-4593-a115-448e7da65482" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.293101 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-cn88d" podUID="c82a8813-bf57-4e7c-88fb-34b0ebee51be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.376111 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.376196 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.417059 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" podUID="29961080-94d4-4275-8d1a-baf1405cf2bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.417229 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.417775 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" podUID="335bce01-df52-41ca-b47a-daa5e8ac917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.417968 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.435197 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.435267 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.459004 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.459099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.459204 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.459235 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.475313 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": dial tcp 10.217.0.45:6080: connect: connection refused" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.519077 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.519582 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.519088 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.519675 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.560059 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" podUID="b7f6258a-2ce1-482c-84ee-e869f191cb69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.560443 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.673080 4792 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podUID="d14a657c-5e70-4847-9b07-f85ce53d7757" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.673211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.715019 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.715070 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-s2pjr" podUID="80afdbc0-ff4c-4806-884d-ef3542b4de9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.715124 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.797070 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" podUID="74eec49e-2c05-49ce-874b-654ec80018e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.797182 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.815461 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.838060 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-dz5pk" podUID="33f808bd-605c-41c7-94fb-92ceab7de0a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.921267 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.921333 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.921417 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.921428 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" podUID="d89e09ff-441b-491e-98f7-9bf618322505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.921660 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.962065 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-7xldx" podUID="e4f68cf5-d501-4468-a9a4-b959ae49db87" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.962481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-8272z" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.962489 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.962570 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.962608 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.972699 4792 generic.go:334] "Generic (PLEG): container finished" podID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerID="c488690842c3cea3e79b5fd60ec46f8e5bfc0a752f44c113987a29df8199fe59" exitCode=0 Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.973011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" event={"ID":"89a3cb59-c0fe-426a-beb3-bf0d77ba0530","Type":"ContainerDied","Data":"c488690842c3cea3e79b5fd60ec46f8e5bfc0a752f44c113987a29df8199fe59"} Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.973988 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d1424878d070d2583a51787f3c1f3f6ec5d880eded73c60a6232d450ebf66415"} pod="openshift-ingress/router-default-5444994796-6k44w" containerMessage="Container router failed liveness probe, will be restarted" Mar 19 18:02:01 crc kubenswrapper[4792]: I0319 18:02:01.974030 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" containerID="cri-o://d1424878d070d2583a51787f3c1f3f6ec5d880eded73c60a6232d450ebf66415" gracePeriod=10 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.003243 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gkg4f" podUID="6832677c-467f-4786-b2f8-9c999c94f3ba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.079071 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-p4npr" podUID="2dceb468-ce3f-4650-ae5e-694664ffb360" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139039 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podUID="91a44cfc-5acd-4b7c-814c-1521b5e2b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139290 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139293 4792 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-cfgxg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139350 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" podUID="6430b947-6329-4e68-9cb4-6e08ee058f70" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.139405 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.140203 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"b48a0a86781cdbf0151171134b9249d236066cb74cd31d022029151e2d40553e"} pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.140241 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" podUID="6430b947-6329-4e68-9cb4-6e08ee058f70" containerName="authentication-operator" containerID="cri-o://b48a0a86781cdbf0151171134b9249d236066cb74cd31d022029151e2d40553e" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.182978 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podUID="23c3a809-9d7c-4d60-be1f-2fbc1583e5d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183002 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183055 4792 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183070 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183112 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183143 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183224 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183354 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 
crc kubenswrapper[4792]: I0319 18:02:02.183374 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183404 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183416 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183806 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183860 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.183882 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.184878 4792 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"11e485c31354616747eaecd1f143ecee5fb729fd819611c2c682454b9488c12b"} pod="openshift-console-operator/console-operator-58897d9998-xkgg2" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.184921 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" containerID="cri-o://11e485c31354616747eaecd1f143ecee5fb729fd819611c2c682454b9488c12b" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.185392 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"42ddb4c03055c27ed6d572924fb639305690a2cad78a583ce733459b626c8fda"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" containerMessage="Container olm-operator failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.185433 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" containerID="cri-o://42ddb4c03055c27ed6d572924fb639305690a2cad78a583ce733459b626c8fda" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.194638 4792 patch_prober.go:28] interesting pod/metrics-server-856df7d6cf-zntpc container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.194704 4792 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" podUID="70963d0d-d9ae-4a3c-a2c7-8e05a90cd337" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.194792 4792 patch_prober.go:28] interesting pod/metrics-server-856df7d6cf-zntpc container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.194805 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-856df7d6cf-zntpc" podUID="70963d0d-d9ae-4a3c-a2c7-8e05a90cd337" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.268373 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.268741 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.311771 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" 
podUID="2f5d3346-4746-45e3-a73e-3d94d586e34d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.311955 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.461999 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462019 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-9qk59 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462052 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462071 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462086 4792 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9qk59" podUID="b749c00a-6a69-4782-8018-7e6f759c9575" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462098 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462109 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462022 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" podUID="335bce01-df52-41ca-b47a-daa5e8ac917e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462123 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462131 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462195 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462317 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462333 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462361 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462375 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462719 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.462757 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.463736 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"d18a2c9d1de6dee35b071bab6c01a888ffb725f1358fb4097efbc5fc4ae06690"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.463770 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" containerID="cri-o://d18a2c9d1de6dee35b071bab6c01a888ffb725f1358fb4097efbc5fc4ae06690" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.464038 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"92e84560dec79b1626ba56020f8300728bdd62b674d120abf5cabce801eafeb2"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.464084 4792 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" containerID="cri-o://92e84560dec79b1626ba56020f8300728bdd62b674d120abf5cabce801eafeb2" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.504120 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" podUID="bce0486f-f235-464e-acd7-bc8da076eebe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.600038 4792 patch_prober.go:28] interesting pod/loki-operator-controller-manager-795c7b44df-ssttv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.600395 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podUID="1d900a68-83bb-40f6-8841-556f80c6ac78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.600481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.610030 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-bw2ct" podUID="03c93f52-3a7f-4fbc-921e-79ad74db2d4e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.715119 4792 patch_prober.go:28] interesting pod/monitoring-plugin-5748767799-dwqlm container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.715189 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" podUID="33bb9632-c429-4194-91fe-698d60a4933a" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.715339 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.716060 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" podUID="d14a657c-5e70-4847-9b07-f85ce53d7757" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.757169 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" podUID="ca8f4495-eabc-425f-82dd-f3c5329de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.757206 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839109 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839160 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839216 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839484 4792 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9q2vd container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839512 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.839562 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.841726 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"8f0207018e0ce6c6ccafd1f48925fd1698c0a4f44e9773c61787a0a835e0f291"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.841786 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" podUID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerName="package-server-manager" containerID="cri-o://8f0207018e0ce6c6ccafd1f48925fd1698c0a4f44e9773c61787a0a835e0f291" gracePeriod=30 Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.925278 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-mdbhz" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.925418 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-lhq2p" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.961981 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.962038 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.984054 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" event={"ID":"bf8a2335-56a0-4c34-ac01-e93578bf4cbd","Type":"ContainerStarted","Data":"cfcd9f7045f5c2c3d6f7987ec118671065a53316af85ed97529198c615f5b81c"} Mar 19 18:02:02 crc kubenswrapper[4792]: I0319 18:02:02.984190 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.179983 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": EOF" start-of-body= Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.180035 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": EOF" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.180011 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" podUID="91a44cfc-5acd-4b7c-814c-1521b5e2b85d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.222165 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" podUID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.263011 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.263073 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.263035 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" podUID="23c3a809-9d7c-4d60-be1f-2fbc1583e5d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.345066 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 
18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.386028 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" podUID="1ca9378b-68d2-4281-b45a-7f40c30bae7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.386104 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" podUID="ae024059-6924-482c-88b6-c845e6932026" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.427042 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.427115 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.427322 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" podUID="2f5d3346-4746-45e3-a73e-3d94d586e34d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537026 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537035 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537089 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537105 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537114 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.537200 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.538557 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"bc0eb405a9ef9a4e9d1c483fe644ed3cf4ae09c982591495e93b393fd714dc73"} pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" containerMessage="Container webhook-server failed liveness probe, will be restarted"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.538598 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" containerID="cri-o://bc0eb405a9ef9a4e9d1c483fe644ed3cf4ae09c982591495e93b393fd714dc73" gracePeriod=2
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.683398 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.683474 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.683555 4792 patch_prober.go:28] interesting pod/loki-operator-controller-manager-795c7b44df-ssttv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.683629 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" podUID="1d900a68-83bb-40f6-8841-556f80c6ac78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.717368 4792 patch_prober.go:28] interesting pod/monitoring-plugin-5748767799-dwqlm container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:03 crc kubenswrapper[4792]: I0319 18:02:03.717463 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" podUID="33bb9632-c429-4194-91fe-698d60a4933a" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.004268 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.044576 4792 generic.go:334] "Generic (PLEG): container finished" podID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerID="92e84560dec79b1626ba56020f8300728bdd62b674d120abf5cabce801eafeb2" exitCode=0
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.044934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" event={"ID":"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7","Type":"ContainerDied","Data":"92e84560dec79b1626ba56020f8300728bdd62b674d120abf5cabce801eafeb2"}
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.045142 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.045199 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.049523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" event={"ID":"89a3cb59-c0fe-426a-beb3-bf0d77ba0530","Type":"ContainerStarted","Data":"5b1d6514637436dc9787460fe2cd5d0c928b7a2347e07cded407772c846cb58c"}
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.049672 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.086564 4792 patch_prober.go:28] interesting pod/perses-operator-5b64d67795-hhzt7 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.086641 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.086825 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5b64d67795-hhzt7"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.087372 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" start-of-body=
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.087425 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.170297 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.170544 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" podUID="8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.272037 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.272176 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.272550 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.272619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.273612 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"abf1058326df618e831edba83d3443b0140840c29518b5c50339ff8946897506"} pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.273651 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" containerID="cri-o://abf1058326df618e831edba83d3443b0140840c29518b5c50339ff8946897506" gracePeriod=10
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.578050 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" podUID="4b613458-1b90-42f8-8d32-d3017f189770" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.772272 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.772599 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.772625 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-7bb4cc7c98-gdvnw"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.772760 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-gdvnw"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.774386 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"9084d0decc4897eebd211030b6964c9ee4729368295671d7b1e7852dfcb5911c"} pod="metallb-system/controller-7bb4cc7c98-gdvnw" containerMessage="Container controller failed liveness probe, will be restarted"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.774464 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-7bb4cc7c98-gdvnw" podUID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerName="controller" containerID="cri-o://9084d0decc4897eebd211030b6964c9ee4729368295671d7b1e7852dfcb5911c" gracePeriod=2
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.810059 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.868031 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.868206 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.869053 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.869136 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.869179 4792 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.869494 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"07e97d4683994fd5540dacaed43fa860b7b75698934b678d403fa90bb02d62af"} pod="metallb-system/frr-k8s-nd7zd" containerMessage="Container controller failed liveness probe, will be restarted"
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.870151 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-nd7zd" podUID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerName="controller" containerID="cri-o://07e97d4683994fd5540dacaed43fa860b7b75698934b678d403fa90bb02d62af" gracePeriod=2
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.908641 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:04 crc kubenswrapper[4792]: I0319 18:02:04.908754 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.004770 4792 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-sjth6 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.004884 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" podUID="9d86fdf3-73d9-48f7-b44f-6182252fc4f8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.062541 4792 generic.go:334] "Generic (PLEG): container finished" podID="05b1938b-461b-46fe-9fb9-28e17c7591bc" containerID="5af0d393bca190b78fa50c881d4fbcfbcad66edf3876c95ba3eafa8a09d61bc3" exitCode=0
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.062631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerDied","Data":"5af0d393bca190b78fa50c881d4fbcfbcad66edf3876c95ba3eafa8a09d61bc3"}
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.075883 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-xkgg2_c43d7a6a-8816-4471-92f5-32dc458c677f/console-operator/0.log"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.075985 4792 generic.go:334] "Generic (PLEG): container finished" podID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerID="11e485c31354616747eaecd1f143ecee5fb729fd819611c2c682454b9488c12b" exitCode=1
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.076110 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" event={"ID":"c43d7a6a-8816-4471-92f5-32dc458c677f","Type":"ContainerDied","Data":"11e485c31354616747eaecd1f143ecee5fb729fd819611c2c682454b9488c12b"}
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.077159 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" start-of-body=
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.077231 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.130041 4792 patch_prober.go:28] interesting pod/perses-operator-5b64d67795-hhzt7 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.130097 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" podUID="3477a59c-705b-42e9-bf3e-6ec92fecfc9e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.314021 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" podUID="30ef8aea-daf2-4351-bf36-a8238738129a" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.316909 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nd7zd"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.810961 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.811139 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="485f0802-7649-4377-99c0-22f04b2ee5bc" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.811757 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815340 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815372 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815418 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h7gpk"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815471 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815503 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.816609 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c"} pod="openshift-marketplace/redhat-operators-h7gpk" containerMessage="Container registry-server failed liveness probe, will be restarted"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.816642 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" containerID="cri-o://4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c" gracePeriod=30
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.815472 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hq59"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.823819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-h7gpk"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.823882 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5hq59"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.824937 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636"} pod="openshift-marketplace/redhat-marketplace-5hq59" containerMessage="Container registry-server failed liveness probe, will be restarted"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.824996 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" containerID="cri-o://867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636" gracePeriod=30
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.830665 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.832576 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.833954 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.835337 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.835983 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.836024 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server"
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.836392 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636" cmd=["grpc_health_probe","-addr=:50051"]
Mar 19 18:02:05 crc kubenswrapper[4792]: E0319 18:02:05.836425 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.894226 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.894308 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.894434 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6cld2"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.894948 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-6cld2"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.895886 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"587cdf040aa5503847d573c1f36fd95a324761cea51b0bb7a748561f3c3e0d5e"} pod="metallb-system/speaker-6cld2" containerMessage="Container speaker failed liveness probe, will be restarted"
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.895948 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" containerID="cri-o://587cdf040aa5503847d573c1f36fd95a324761cea51b0bb7a748561f3c3e0d5e" gracePeriod=2
Mar 19 18:02:05 crc kubenswrapper[4792]: I0319 18:02:05.897792 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-gdvnw"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.010255 4792 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-lmw24 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.010321 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" podUID="54c15722-d849-4290-bf53-39c4383912e4" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.092626 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-xkgg2_c43d7a6a-8816-4471-92f5-32dc458c677f/console-operator/0.log"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.092698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" event={"ID":"c43d7a6a-8816-4471-92f5-32dc458c677f","Type":"ContainerStarted","Data":"b8545787c717802f507d7be8b0645a97adbd211a363a87c1d7fe42ec9308d5de"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.093060 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xkgg2"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.093333 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.093375 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.095593 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b613458-1b90-42f8-8d32-d3017f189770" containerID="bc0eb405a9ef9a4e9d1c483fe644ed3cf4ae09c982591495e93b393fd714dc73" exitCode=137
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.095666 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" event={"ID":"4b613458-1b90-42f8-8d32-d3017f189770","Type":"ContainerDied","Data":"bc0eb405a9ef9a4e9d1c483fe644ed3cf4ae09c982591495e93b393fd714dc73"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.099772 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" event={"ID":"a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7","Type":"ContainerStarted","Data":"8bd7515c291ff8a4ecf4750b6e6bf9756291d0b1361725815f4b53026b72fdd1"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.100805 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.101103 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.101147 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.117896 4792 generic.go:334] "Generic (PLEG): container finished" podID="81f1b6c9-e921-49a2-8149-767fe360d7d0" containerID="07e97d4683994fd5540dacaed43fa860b7b75698934b678d403fa90bb02d62af" exitCode=0
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.117980 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerDied","Data":"07e97d4683994fd5540dacaed43fa860b7b75698934b678d403fa90bb02d62af"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.126543 4792 generic.go:334] "Generic (PLEG): container finished" podID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerID="42ddb4c03055c27ed6d572924fb639305690a2cad78a583ce733459b626c8fda" exitCode=0
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.126626 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" event={"ID":"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4","Type":"ContainerDied","Data":"42ddb4c03055c27ed6d572924fb639305690a2cad78a583ce733459b626c8fda"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.129921 4792 generic.go:334] "Generic (PLEG): container finished" podID="e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76" containerID="a8cc45d614d01b79bc8226ff741d7f200f7dbd01e7ef867245c9c075c8aff53d" exitCode=1
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.130000 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" event={"ID":"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76","Type":"ContainerDied","Data":"a8cc45d614d01b79bc8226ff741d7f200f7dbd01e7ef867245c9c075c8aff53d"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.131366 4792 generic.go:334] "Generic (PLEG): container finished" podID="30ef8aea-daf2-4351-bf36-a8238738129a" containerID="abf1058326df618e831edba83d3443b0140840c29518b5c50339ff8946897506" exitCode=0
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.131416 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" event={"ID":"30ef8aea-daf2-4351-bf36-a8238738129a","Type":"ContainerDied","Data":"abf1058326df618e831edba83d3443b0140840c29518b5c50339ff8946897506"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.131865 4792 scope.go:117] "RemoveContainer" containerID="a8cc45d614d01b79bc8226ff741d7f200f7dbd01e7ef867245c9c075c8aff53d"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.133115 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerID="d18a2c9d1de6dee35b071bab6c01a888ffb725f1358fb4097efbc5fc4ae06690" exitCode=0
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.133208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" event={"ID":"2d09edb3-848f-4a5d-bccf-4122850cb7bb","Type":"ContainerDied","Data":"d18a2c9d1de6dee35b071bab6c01a888ffb725f1358fb4097efbc5fc4ae06690"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.134582 4792 generic.go:334] "Generic (PLEG): container finished" podID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerID="fd188ec05c07ed602a7a49a17e83601a9d8d17b36b4bc5f3638428c58d0da6ae" exitCode=0
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.134607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" event={"ID":"7f7fc8f3-521e-42a6-95e0-18f42faf92c4","Type":"ContainerDied","Data":"fd188ec05c07ed602a7a49a17e83601a9d8d17b36b4bc5f3638428c58d0da6ae"}
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.242129 4792 patch_prober.go:28] interesting pod/oauth-openshift-65556786d7-stv4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.242441 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.292331 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.292378 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.321669 4792 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-z95d6 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" start-of-body=
Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.321726 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6"
podUID="03d0f2d0-18de-48b9-ba57-85e09753dccf" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.321813 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.433714 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-vz8rf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.433808 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-vz8rf" podUID="1e5dbe4d-6818-4b0d-a372-b9574882f2ad" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.434188 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.434219 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" Mar 19 18:02:06 crc 
kubenswrapper[4792]: I0319 18:02:06.434310 4792 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2r6xw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.434329 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" podUID="89a3cb59-c0fe-426a-beb3-bf0d77ba0530" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": dial tcp 10.217.0.79:8443: connect: connection refused" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.474064 4792 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-ljg58 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.474128 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" podUID="78b39436-d594-47d8-9e75-8470495398ac" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.474202 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.518056 4792 patch_prober.go:28] interesting pod/logging-loki-gateway-5bc6c599cb-2gcbl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.518132 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5bc6c599cb-2gcbl" podUID="10c782de-230d-407d-9bb1-2a8a3a8da91c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.796302 4792 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.796681 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.817100 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.817176 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.817348 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.817465 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.826694 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"8a3671f6da4a27ed50f4a23c001de3b5f6eaa70ae7175f2ff5f47bee71109651"} pod="openstack-operators/openstack-operator-index-v9gs9" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.826777 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" containerID="cri-o://8a3671f6da4a27ed50f4a23c001de3b5f6eaa70ae7175f2ff5f47bee71109651" gracePeriod=30 Mar 19 18:02:06 crc kubenswrapper[4792]: I0319 18:02:06.912082 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6cld2" podUID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.027946 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-z95d6" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.031050 4792 trace.go:236] Trace[601168051]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (19-Mar-2026 18:02:02.407) (total time: 4620ms): Mar 19 18:02:07 crc kubenswrapper[4792]: Trace[601168051]: [4.620667473s] [4.620667473s] END Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 
18:02:07.043829 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="74993dec-a63b-4856-913e-39ec56f88058" containerName="galera" containerID="cri-o://f256c98fb2d8568bfe54d6a492050c3f0b90acc15b08924c19543f88117420f9" gracePeriod=19 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.060711 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" containerID="cri-o://55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6" gracePeriod=18 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.123604 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-ljg58" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.130725 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:07 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s Mar 19 18:02:07 crc kubenswrapper[4792]: > Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.130827 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.134671 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:07 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s Mar 19 18:02:07 crc kubenswrapper[4792]: > Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.134751 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.135681 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:07 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s Mar 19 18:02:07 crc kubenswrapper[4792]: > Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.135736 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.139766 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:07 crc kubenswrapper[4792]: timeout: health rpc did not complete within 1s Mar 19 18:02:07 crc kubenswrapper[4792]: > Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.139816 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.163349 4792 generic.go:334] "Generic (PLEG): container finished" podID="69f67eea-c8b3-40a4-891a-4c15c31cb410" containerID="9084d0decc4897eebd211030b6964c9ee4729368295671d7b1e7852dfcb5911c" exitCode=0 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.163416 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gdvnw" event={"ID":"69f67eea-c8b3-40a4-891a-4c15c31cb410","Type":"ContainerDied","Data":"9084d0decc4897eebd211030b6964c9ee4729368295671d7b1e7852dfcb5911c"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.168157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" event={"ID":"e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4","Type":"ContainerStarted","Data":"7b6bed957609d94379f00273cf774ffd2cec05013c7b4ef0f802f13202f818cc"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.168754 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.170126 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.170169 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.174457 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" event={"ID":"4b613458-1b90-42f8-8d32-d3017f189770","Type":"ContainerStarted","Data":"dc792df4a0c0821edf377c9e50052ce60e3df65ae79906a5dd0fe0c2e255b72d"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.175755 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.186181 4792 generic.go:334] "Generic (PLEG): container finished" podID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerID="bbf99bcf3f1a102ffda62028210cde474da248eaba75dd048f3b8d64a3411cd2" exitCode=0 Mar 19 18:02:07 
crc kubenswrapper[4792]: I0319 18:02:07.186485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerDied","Data":"bbf99bcf3f1a102ffda62028210cde474da248eaba75dd048f3b8d64a3411cd2"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.188516 4792 generic.go:334] "Generic (PLEG): container finished" podID="74eec49e-2c05-49ce-874b-654ec80018e6" containerID="d860e960f1fdf5f0b8a0e62056d6b6363e564a8e56fe01c48046f5667be2dc85" exitCode=1 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.188560 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" event={"ID":"74eec49e-2c05-49ce-874b-654ec80018e6","Type":"ContainerDied","Data":"d860e960f1fdf5f0b8a0e62056d6b6363e564a8e56fe01c48046f5667be2dc85"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.189537 4792 scope.go:117] "RemoveContainer" containerID="d860e960f1fdf5f0b8a0e62056d6b6363e564a8e56fe01c48046f5667be2dc85" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.195954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" event={"ID":"e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76","Type":"ContainerStarted","Data":"aa7cd1df218092834058ca2e3c5bbc846b461c69b9c9e246fa10d158f5d52829"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.196784 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.201020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" event={"ID":"30ef8aea-daf2-4351-bf36-a8238738129a","Type":"ContainerStarted","Data":"6a3cc7211342ad9fbaa69c80936d2c9015ddfcb8f4fe01ea04ac5a7635922118"} Mar 19 
18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.201233 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.206314 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8" exitCode=0 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.206375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cf38cf61bfba8521996109c43044a6c6c24c333a36872a8eb4c56ed078fcddf8"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.209769 4792 generic.go:334] "Generic (PLEG): container finished" podID="b7f6258a-2ce1-482c-84ee-e869f191cb69" containerID="c28aa1dc1d29662f1f56ad32a8d85849eb111f42af8515ead8df7f1b6043a7a0" exitCode=1 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.209908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" event={"ID":"b7f6258a-2ce1-482c-84ee-e869f191cb69","Type":"ContainerDied","Data":"c28aa1dc1d29662f1f56ad32a8d85849eb111f42af8515ead8df7f1b6043a7a0"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.211562 4792 scope.go:117] "RemoveContainer" containerID="c28aa1dc1d29662f1f56ad32a8d85849eb111f42af8515ead8df7f1b6043a7a0" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.226095 4792 generic.go:334] "Generic (PLEG): container finished" podID="6430b947-6329-4e68-9cb4-6e08ee058f70" containerID="b48a0a86781cdbf0151171134b9249d236066cb74cd31d022029151e2d40553e" exitCode=0 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.226273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" 
event={"ID":"6430b947-6329-4e68-9cb4-6e08ee058f70","Type":"ContainerDied","Data":"b48a0a86781cdbf0151171134b9249d236066cb74cd31d022029151e2d40553e"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.226308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cfgxg" event={"ID":"6430b947-6329-4e68-9cb4-6e08ee058f70","Type":"ContainerStarted","Data":"f1d457a2ee3c02bf2aa6c82772dc8167ebfcdc08811a253e1029ec32b0401c55"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.256523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nd7zd" event={"ID":"81f1b6c9-e921-49a2-8149-767fe360d7d0","Type":"ContainerStarted","Data":"f5d7c917d219f423c62ff8522e2f5ef9ff995bf3123aa6d401ed5e4d96d21bfa"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.257189 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nd7zd" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.260544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" event={"ID":"2d09edb3-848f-4a5d-bccf-4122850cb7bb","Type":"ContainerStarted","Data":"e0d1f9b1d222ec3dee65636e6bf1c867497686508e1bc48fa87ef98be2eb5b8f"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.260930 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.261226 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.261281 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.266866 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.266974 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05b1938b-461b-46fe-9fb9-28e17c7591bc","Type":"ContainerStarted","Data":"ecbe4e0b9ec4cef6c81140284ae3817b3d473fd565143d172d36f7b83671a0d9"} Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.267391 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.267495 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.268125 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311"} 
pod="openshift-marketplace/certified-operators-8m42q" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.268228 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" containerID="cri-o://47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311" gracePeriod=30 Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.268397 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.281093 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.281136 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.438545 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:07 crc kubenswrapper[4792]: E0319 18:02:07.440242 4792 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311" cmd=["grpc_health_probe","-addr=:50051"] Mar 19 18:02:07 crc kubenswrapper[4792]: E0319 18:02:07.441617 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311" cmd=["grpc_health_probe","-addr=:50051"] Mar 19 18:02:07 crc kubenswrapper[4792]: E0319 18:02:07.443281 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311" cmd=["grpc_health_probe","-addr=:50051"] Mar 19 18:02:07 crc kubenswrapper[4792]: E0319 18:02:07.443335 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" Mar 19 18:02:07 crc kubenswrapper[4792]: I0319 18:02:07.448881 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.044449 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output="" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.296141 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" event={"ID":"b7f6258a-2ce1-482c-84ee-e869f191cb69","Type":"ContainerStarted","Data":"d28b6a5d0981385c3651ab57a1729164206a6232420d7d15497a375e7ae533f2"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.299287 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.301730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" event={"ID":"356468d1-7817-4566-bb80-ca21f4b9ff24","Type":"ContainerStarted","Data":"9d857bca665d0fff01483f51c036ff0b46d38255e96097918ef8bc4c82f7e475"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.302658 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.304924 4792 generic.go:334] "Generic (PLEG): container finished" podID="a446d1fe-6ebb-425a-8b70-b3225da28873" containerID="8f0207018e0ce6c6ccafd1f48925fd1698c0a4f44e9773c61787a0a835e0f291" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.304974 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" event={"ID":"a446d1fe-6ebb-425a-8b70-b3225da28873","Type":"ContainerDied","Data":"8f0207018e0ce6c6ccafd1f48925fd1698c0a4f44e9773c61787a0a835e0f291"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.304989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" 
event={"ID":"a446d1fe-6ebb-425a-8b70-b3225da28873","Type":"ContainerStarted","Data":"a764eebbcb1db7e44324a805f52467a9513b9ac53222969720202ccb29157acd"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.305526 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.316073 4792 generic.go:334] "Generic (PLEG): container finished" podID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerID="867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.316233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerDied","Data":"867888f639ec2e37428545bd2bef0d4184089bfea99da537baadc66010fd6636"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.328832 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.328911 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.332682 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerID="8a3671f6da4a27ed50f4a23c001de3b5f6eaa70ae7175f2ff5f47bee71109651" exitCode=0 
Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.332743 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v9gs9" event={"ID":"2d317332-2487-47d0-b052-eb6bd421c0d1","Type":"ContainerDied","Data":"8a3671f6da4a27ed50f4a23c001de3b5f6eaa70ae7175f2ff5f47bee71109651"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.340901 4792 generic.go:334] "Generic (PLEG): container finished" podID="ee375e3b-1376-4cd4-93b7-da4316b203a7" containerID="587cdf040aa5503847d573c1f36fd95a324761cea51b0bb7a748561f3c3e0d5e" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.340995 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6cld2" event={"ID":"ee375e3b-1376-4cd4-93b7-da4316b203a7","Type":"ContainerDied","Data":"587cdf040aa5503847d573c1f36fd95a324761cea51b0bb7a748561f3c3e0d5e"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.358283 4792 generic.go:334] "Generic (PLEG): container finished" podID="84114ace-d7fd-41a3-9fa6-87df44501023" containerID="47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.358402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerDied","Data":"47145a7d4546a92d249ce3652000fec4d752e011c0b7b0713ab26e8050917311"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.366756 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gdvnw" event={"ID":"69f67eea-c8b3-40a4-891a-4c15c31cb410","Type":"ContainerStarted","Data":"88272551f3262537fb11fd0ef9d64e21321402a3a8808b2f17318a7256e52138"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.367126 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 18:02:08 crc kubenswrapper[4792]: 
I0319 18:02:08.388603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" event={"ID":"7f7fc8f3-521e-42a6-95e0-18f42faf92c4","Type":"ContainerStarted","Data":"d1b551409e2adfa030d52ca803788d45f5fce77affd34f63665eef9bf8e57827"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.392083 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.392233 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" start-of-body= Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.392270 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.398303 4792 generic.go:334] "Generic (PLEG): container finished" podID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerID="4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.398381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerDied","Data":"4d4d1636c09e28e298739d7ff2f0be74f0ff340947ff7fb9fb933d125ce5fe9c"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.402356 4792 generic.go:334] "Generic (PLEG): container finished" podID="359345fa-dd3f-4812-9760-7eb10d601634" 
containerID="46a9f60d6a4266af70cd825c3c38a2ff12759f05154eebe0e4d2afc81f0ead8c" exitCode=0 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.402480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" event={"ID":"359345fa-dd3f-4812-9760-7eb10d601634","Type":"ContainerDied","Data":"46a9f60d6a4266af70cd825c3c38a2ff12759f05154eebe0e4d2afc81f0ead8c"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.422210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" event={"ID":"74eec49e-2c05-49ce-874b-654ec80018e6","Type":"ContainerStarted","Data":"9d96e1e1c08854750dafb2323539ff8d44776c97a7cd01254beb403992a69752"} Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.422921 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a697f1c3f685b693a9c48ace845375afce8c47b387ddc693f7405cf593a8311c"} pod="openshift-marketplace/community-operators-gb64t" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.422967 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" containerID="cri-o://a697f1c3f685b693a9c48ace845375afce8c47b387ddc693f7405cf593a8311c" gracePeriod=30 Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423438 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423463 
4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423504 4792 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" containerID="cri-o://d860e960f1fdf5f0b8a0e62056d6b6363e564a8e56fe01c48046f5667be2dc85" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423517 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423661 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.423691 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.434170 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 18:02:08 crc 
kubenswrapper[4792]: I0319 18:02:08.434452 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 19 18:02:08 crc kubenswrapper[4792]: I0319 18:02:08.959182 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nd7zd" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.468542 4792 generic.go:334] "Generic (PLEG): container finished" podID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerID="a697f1c3f685b693a9c48ace845375afce8c47b387ddc693f7405cf593a8311c" exitCode=0 Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.468642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerDied","Data":"a697f1c3f685b693a9c48ace845375afce8c47b387ddc693f7405cf593a8311c"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.482348 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7gpk" event={"ID":"9faaddd3-77ad-4bc9-97ce-21a824aeb1c0","Type":"ContainerStarted","Data":"3000fa839e3319be9de8655ee90c3fe84441da1a0c00cb32d4fa898d947e7697"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.486350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hq59" event={"ID":"380412c4-57ca-4428-838c-ab93fc6c71cc","Type":"ContainerStarted","Data":"7f40da3193d91a358625d31d760f06d5a19c2c6c834af2c7ea4a7e4c51a3b2c9"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.489799 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v9gs9" 
event={"ID":"2d317332-2487-47d0-b052-eb6bd421c0d1","Type":"ContainerStarted","Data":"29892c841fa66aa7aa17f0899c77b959fc216ef520c203a8772cc74a56f0113b"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.492428 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" event={"ID":"359345fa-dd3f-4812-9760-7eb10d601634","Type":"ContainerStarted","Data":"b5adbb90f0ef240c5c5f9986f3e4f456f26969f9c7c0af4178ada232e078c97a"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.493399 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.493821 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.493887 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.496458 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"41cac9b2467b42c0ac73308277d704710c97599b9bd10bc308e1a33ce8ecf018"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.496858 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.514268 4792 generic.go:334] "Generic (PLEG): container finished" podID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerID="7416e76ed8d24a2ee7ad89edf653f9bc75af88b082e61ccc27b8a92bf90841a6" exitCode=0 Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.514388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" event={"ID":"5575e5d6-2fee-4709-8eb9-7b3bff5c7563","Type":"ContainerDied","Data":"7416e76ed8d24a2ee7ad89edf653f9bc75af88b082e61ccc27b8a92bf90841a6"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.514418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" event={"ID":"5575e5d6-2fee-4709-8eb9-7b3bff5c7563","Type":"ContainerStarted","Data":"91acb173151cb0303b66cbd6891dbf867656981f36c087c5d5379307a6646339"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.515689 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.515944 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.515987 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 19 18:02:09 crc kubenswrapper[4792]: 
I0319 18:02:09.534543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6cld2" event={"ID":"ee375e3b-1376-4cd4-93b7-da4316b203a7","Type":"ContainerStarted","Data":"853136bbc23894f7009fa9757a72b3d81e6ba63b49a97abe66bf291163dc157b"} Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.534588 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6cld2" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.536303 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.537024 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" start-of-body= Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.537065 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.914284 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="4d9a5546-9c67-4684-8efd-c6c515dcb25d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 18:02:09 crc kubenswrapper[4792]: I0319 18:02:09.940871 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:09 crc kubenswrapper[4792]: 
timeout: failed to connect service ":50051" within 1s Mar 19 18:02:09 crc kubenswrapper[4792]: > Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.321027 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v6tfl" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.419063 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-zkx8w" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.557518 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m42q" event={"ID":"84114ace-d7fd-41a3-9fa6-87df44501023","Type":"ContainerStarted","Data":"0ace39229ac14873f9a5e22297f4c20b9ab1bc0b0bbe52a28c301ea01557701d"} Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.568926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb64t" event={"ID":"7a6583ed-1c62-448f-98f6-6055fe84c457","Type":"ContainerStarted","Data":"f6694bdb6b555674eb79784520f1dfac1660cc68712b631bb354fed11ada9a27"} Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.569668 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.569709 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 19 18:02:10 crc kubenswrapper[4792]: 
I0319 18:02:10.569807 4792 patch_prober.go:28] interesting pod/controller-manager-9c7bf785c-5ptd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.569861 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" podUID="5575e5d6-2fee-4709-8eb9-7b3bff5c7563" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.569982 4792 patch_prober.go:28] interesting pod/route-controller-manager-65478b57cc-lltk5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.570022 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" podUID="359345fa-dd3f-4812-9760-7eb10d601634" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.570054 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" start-of-body= Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.570075 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.582074 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.582127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.607521 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-h5w4z" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.705401 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Mar 19 18:02:10 crc kubenswrapper[4792]: [+]has-synced ok Mar 19 18:02:10 crc kubenswrapper[4792]: [-]process-running failed: reason withheld Mar 19 18:02:10 crc kubenswrapper[4792]: healthz check failed Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.705459 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.708277 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b66p7" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.782867 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 18:02:10 crc kubenswrapper[4792]: I0319 18:02:10.782942 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.097710 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-78877dc965-lmkcj" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.132769 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-7sklh" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.167299 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.167353 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-xkgg2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.167388 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.167350 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" 
podUID="c43d7a6a-8816-4471-92f5-32dc458c677f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.182738 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.182790 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.182796 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zsdng container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.182850 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" podUID="e05a55ab-b7e5-45ce-a692-9e7b0f9d96a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.229693 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-rg6qq" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.262154 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7658474f4d-cpqrx" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.428575 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.428617 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-55nsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.428637 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.428664 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" podUID="2d09edb3-848f-4a5d-bccf-4122850cb7bb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.453992 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection 
refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.454019 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-25htk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.454048 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.454077 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" podUID="a0a6bdec-1a6d-4b76-a536-2dfdaf7e4ac7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.475098 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" podUID="bf8a2335-56a0-4c34-ac01-e93578bf4cbd" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": dial tcp 10.217.0.45:6080: connect: connection refused" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.559126 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-795c7b44df-ssttv" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.584927 4792 generic.go:334] "Generic (PLEG): container finished" podID="74993dec-a63b-4856-913e-39ec56f88058" 
containerID="f256c98fb2d8568bfe54d6a492050c3f0b90acc15b08924c19543f88117420f9" exitCode=0 Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.585040 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerDied","Data":"f256c98fb2d8568bfe54d6a492050c3f0b90acc15b08924c19543f88117420f9"} Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.585088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"74993dec-a63b-4856-913e-39ec56f88058","Type":"ContainerStarted","Data":"7f3645754cda8277b7aa15044d10d08e1f616fa0ff8d5bf768308b78a0097982"} Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.758557 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.758592 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.857019 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:11 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:11 crc kubenswrapper[4792]: > Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.925423 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:11 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:11 crc kubenswrapper[4792]: > Mar 19 18:02:11 crc kubenswrapper[4792]: I0319 18:02:11.931342 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5748767799-dwqlm" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.221834 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="4d9a5546-9c67-4684-8efd-c6c515dcb25d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.240794 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-p22vv" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.300745 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.300806 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.303666 4792 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gvfqb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.305664 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" 
podUID="356468d1-7817-4566-bb80-ca21f4b9ff24" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.603132 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-6k44w_2d1ad570-6354-44ba-802c-4860784bf053/router/0.log" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.603175 4792 generic.go:334] "Generic (PLEG): container finished" podID="2d1ad570-6354-44ba-802c-4860784bf053" containerID="d1424878d070d2583a51787f3c1f3f6ec5d880eded73c60a6232d450ebf66415" exitCode=137 Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.603327 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6k44w" event={"ID":"2d1ad570-6354-44ba-802c-4860784bf053","Type":"ContainerDied","Data":"d1424878d070d2583a51787f3c1f3f6ec5d880eded73c60a6232d450ebf66415"} Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.628408 4792 generic.go:334] "Generic (PLEG): container finished" podID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerID="55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6" exitCode=0 Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.628711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerDied","Data":"55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6"} Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.641216 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" start-of-body= Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.641513 4792 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.641377 4792 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-625pf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" start-of-body= Mar 19 18:02:12 crc kubenswrapper[4792]: I0319 18:02:12.641581 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" podUID="7f7fc8f3-521e-42a6-95e0-18f42faf92c4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": dial tcp 10.217.0.14:8081: connect: connection refused" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.016894 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5b64d67795-hhzt7" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.016932 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.017635 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.076951 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c7d9f85c5-9nrsb" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.196609 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.198938 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:13 crc kubenswrapper[4792]: E0319 18:02:13.421091 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6 is running failed: container process not found" containerID="55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 19 18:02:13 crc kubenswrapper[4792]: E0319 18:02:13.421435 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6 is running failed: container process not found" containerID="55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 19 18:02:13 crc kubenswrapper[4792]: E0319 18:02:13.421674 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6 is running failed: container process not found" containerID="55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 19 18:02:13 crc kubenswrapper[4792]: E0319 18:02:13.421699 4792 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55c7cdfbfcdee47c633894d0c441af2fc235efa7a845a8ce0b5ecd08d82693f6 is running failed: container process not found" 
probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575" containerName="galera" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.650793 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-6k44w_2d1ad570-6354-44ba-802c-4860784bf053/router/0.log" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.652338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6k44w" event={"ID":"2d1ad570-6354-44ba-802c-4860784bf053","Type":"ContainerStarted","Data":"425e51868e6c83424a742680507aaa62ab590caf1e3b332bf2a8c8e5208cf4c6"} Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.655674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575","Type":"ContainerStarted","Data":"eb0aeb26ba82792e20e9720540bfb6a15bcdb123f4f9d4a7aacd8a93781f1ede"} Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.700598 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.701810 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 19 18:02:13 crc kubenswrapper[4792]: I0319 18:02:13.701868 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.020628 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-sjth6" Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.021507 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-v9gs9" podUID="2d317332-2487-47d0-b052-eb6bd421c0d1" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:14 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:14 crc kubenswrapper[4792]: > Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.122586 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:14 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:14 crc kubenswrapper[4792]: > Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.277207 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:14 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:14 crc kubenswrapper[4792]: > Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.438755 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.439173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.774092 4792 patch_prober.go:28] interesting pod/router-default-5444994796-6k44w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Mar 19 18:02:14 crc kubenswrapper[4792]: [+]has-synced ok Mar 19 18:02:14 crc kubenswrapper[4792]: [+]process-running ok Mar 19 18:02:14 crc kubenswrapper[4792]: healthz check failed Mar 19 18:02:14 crc kubenswrapper[4792]: I0319 18:02:14.774204 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6k44w" podUID="2d1ad570-6354-44ba-802c-4860784bf053" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.185813 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-lmw24" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.250000 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.711317 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.730886 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.775106 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvfqb" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.775149 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6k44w" Mar 19 18:02:15 crc kubenswrapper[4792]: I0319 18:02:15.970539 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565722-4zmxd"] Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.025931 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.029533 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="4d9a5546-9c67-4684-8efd-c6c515dcb25d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.029593 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.032399 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"1f3c7aa25ec1865b44a3c60a18ddad579b22374eb885c5ba2368c7484383bd08"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.032470 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4d9a5546-9c67-4684-8efd-c6c515dcb25d" containerName="cinder-scheduler" containerID="cri-o://1f3c7aa25ec1865b44a3c60a18ddad579b22374eb885c5ba2368c7484383bd08" gracePeriod=30 Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.043283 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.054290 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.057948 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.059979 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29565722-4zmxd"] Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.171830 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxb65\" (UniqueName: \"kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65\") pod \"auto-csr-approver-29565722-4zmxd\" (UID: \"582089ff-ec32-4c78-bb81-d650559d9659\") " pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.281356 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxb65\" (UniqueName: \"kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65\") pod \"auto-csr-approver-29565722-4zmxd\" (UID: \"582089ff-ec32-4c78-bb81-d650559d9659\") " pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.345633 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxb65\" (UniqueName: \"kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65\") pod \"auto-csr-approver-29565722-4zmxd\" (UID: \"582089ff-ec32-4c78-bb81-d650559d9659\") " pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.405333 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.457830 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2r6xw" Mar 19 18:02:16 crc kubenswrapper[4792]: I0319 18:02:16.494448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bgdjc" Mar 19 18:02:17 crc kubenswrapper[4792]: I0319 18:02:17.285933 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65478b57cc-lltk5" Mar 19 18:02:17 crc kubenswrapper[4792]: I0319 18:02:17.289313 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9c7bf785c-5ptd8" Mar 19 18:02:18 crc kubenswrapper[4792]: I0319 18:02:18.165347 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565722-4zmxd"] Mar 19 18:02:18 crc kubenswrapper[4792]: I0319 18:02:18.814165 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d9a5546-9c67-4684-8efd-c6c515dcb25d" containerID="1f3c7aa25ec1865b44a3c60a18ddad579b22374eb885c5ba2368c7484383bd08" exitCode=0 Mar 19 18:02:18 crc kubenswrapper[4792]: I0319 18:02:18.814279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d9a5546-9c67-4684-8efd-c6c515dcb25d","Type":"ContainerDied","Data":"1f3c7aa25ec1865b44a3c60a18ddad579b22374eb885c5ba2368c7484383bd08"} Mar 19 18:02:18 crc kubenswrapper[4792]: I0319 18:02:18.817864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" event={"ID":"582089ff-ec32-4c78-bb81-d650559d9659","Type":"ContainerStarted","Data":"540b4e101adc0964e9742dfd2c2578eff03665503150d145a22a498da56201bb"} Mar 19 18:02:19 crc 
kubenswrapper[4792]: I0319 18:02:19.773079 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:19 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:19 crc kubenswrapper[4792]: > Mar 19 18:02:19 crc kubenswrapper[4792]: I0319 18:02:19.896637 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.066834 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.521434 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-lkhgd" Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.659734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-mt22x" Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.847964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4d9a5546-9c67-4684-8efd-c6c515dcb25d","Type":"ContainerStarted","Data":"33a3579dadf2f4d2e1a49155202f95b8c82bdc8a7037f62854e68f6a7e4a2314"} Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.854817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" event={"ID":"582089ff-ec32-4c78-bb81-d650559d9659","Type":"ContainerStarted","Data":"21b14d90ba1210bbdea112cea9ea38a3a20d86cc0ae8b6b007e8e9a5938a896a"} Mar 19 18:02:20 crc kubenswrapper[4792]: I0319 18:02:20.899276 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29565722-4zmxd" podStartSLOduration=13.948023008 podStartE2EDuration="14.899258681s" podCreationTimestamp="2026-03-19 18:02:06 +0000 UTC" firstStartedPulling="2026-03-19 18:02:18.17814253 +0000 UTC m=+4901.324200070" lastFinishedPulling="2026-03-19 18:02:19.129378203 +0000 UTC m=+4902.275435743" observedRunningTime="2026-03-19 18:02:20.894894152 +0000 UTC m=+4904.040951692" watchObservedRunningTime="2026-03-19 18:02:20.899258681 +0000 UTC m=+4904.045316221" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.169016 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xkgg2" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.185327 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zsdng" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.432139 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-55nsz" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.456620 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-25htk" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.530022 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" podUID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerName="oauth-openshift" containerID="cri-o://57686d57dced50c52b9d5d3436c0604f76a4afec7189e37e0899020bfe2dc486" gracePeriod=15 Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.649665 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" 
output=< Mar 19 18:02:21 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:21 crc kubenswrapper[4792]: > Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.830565 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.842255 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:21 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:21 crc kubenswrapper[4792]: > Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.882912 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v9gs9" Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.964210 4792 generic.go:334] "Generic (PLEG): container finished" podID="d29d0577-d9f9-4402-a79d-06557b2f2826" containerID="8e29a40dfe09cb121e3da4b2c5c6eb6653bc283d5d747da772efc7c941d61019" exitCode=1 Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.964297 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d29d0577-d9f9-4402-a79d-06557b2f2826","Type":"ContainerDied","Data":"8e29a40dfe09cb121e3da4b2c5c6eb6653bc283d5d747da772efc7c941d61019"} Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.968562 4792 generic.go:334] "Generic (PLEG): container finished" podID="14d78136-a62d-4252-adf4-f9830e9fe8c1" containerID="57686d57dced50c52b9d5d3436c0604f76a4afec7189e37e0899020bfe2dc486" exitCode=0 Mar 19 18:02:21 crc kubenswrapper[4792]: I0319 18:02:21.968664 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" 
event={"ID":"14d78136-a62d-4252-adf4-f9830e9fe8c1","Type":"ContainerDied","Data":"57686d57dced50c52b9d5d3436c0604f76a4afec7189e37e0899020bfe2dc486"} Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.466599 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8559bd9b58-4dc8w" Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.667567 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-625pf" Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.980079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" event={"ID":"14d78136-a62d-4252-adf4-f9830e9fe8c1","Type":"ContainerStarted","Data":"a7d104de715a9f66872dbd3547a8cf9e8f0e04eb2ff5f6c170743087f3413d56"} Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.982241 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.984235 4792 generic.go:334] "Generic (PLEG): container finished" podID="582089ff-ec32-4c78-bb81-d650559d9659" containerID="21b14d90ba1210bbdea112cea9ea38a3a20d86cc0ae8b6b007e8e9a5938a896a" exitCode=0 Mar 19 18:02:22 crc kubenswrapper[4792]: I0319 18:02:22.984548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" event={"ID":"582089ff-ec32-4c78-bb81-d650559d9659","Type":"ContainerDied","Data":"21b14d90ba1210bbdea112cea9ea38a3a20d86cc0ae8b6b007e8e9a5938a896a"} Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.163185 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.236560 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-kzh4h" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.319761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65556786d7-stv4d" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.419942 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.419999 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.649592 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.705966 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-gdvnw" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.808514 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nd7zd" Mar 19 18:02:23 crc kubenswrapper[4792]: I0319 18:02:23.998792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d29d0577-d9f9-4402-a79d-06557b2f2826","Type":"ContainerDied","Data":"c11343480aa900caeae361df8b2e66bfe23bb6442e5fa3a8288620d159c55dcc"} Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:23.999663 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11343480aa900caeae361df8b2e66bfe23bb6442e5fa3a8288620d159c55dcc" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.062406 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.140488 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gb64t" podUID="7a6583ed-1c62-448f-98f6-6055fe84c457" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:24 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:24 crc kubenswrapper[4792]: > Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.159313 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242537 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242605 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242640 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key\") pod 
\"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242813 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242883 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxks5\" (UniqueName: \"kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.242962 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.243034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.243052 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary\") pod \"d29d0577-d9f9-4402-a79d-06557b2f2826\" (UID: \"d29d0577-d9f9-4402-a79d-06557b2f2826\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.244183 4792 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data" (OuterVolumeSpecName: "config-data") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.244706 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.251691 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.274792 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5" (OuterVolumeSpecName: "kube-api-access-kxks5") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "kube-api-access-kxks5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.274898 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.275961 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:24 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:24 crc kubenswrapper[4792]: > Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.318427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.337761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359494 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359534 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359548 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxks5\" (UniqueName: \"kubernetes.io/projected/d29d0577-d9f9-4402-a79d-06557b2f2826-kube-api-access-kxks5\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359556 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d29d0577-d9f9-4402-a79d-06557b2f2826-config-data\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359568 4792 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359579 4792 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d29d0577-d9f9-4402-a79d-06557b2f2826-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.359603 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 19 18:02:24 
crc kubenswrapper[4792]: I0319 18:02:24.361770 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.433050 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d29d0577-d9f9-4402-a79d-06557b2f2826" (UID: "d29d0577-d9f9-4402-a79d-06557b2f2826"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.462029 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.464458 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.477146 4792 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d29d0577-d9f9-4402-a79d-06557b2f2826-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.477230 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.620800 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.783388 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxb65\" (UniqueName: \"kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65\") pod \"582089ff-ec32-4c78-bb81-d650559d9659\" (UID: \"582089ff-ec32-4c78-bb81-d650559d9659\") " Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.789025 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65" (OuterVolumeSpecName: "kube-api-access-wxb65") pod "582089ff-ec32-4c78-bb81-d650559d9659" (UID: "582089ff-ec32-4c78-bb81-d650559d9659"). InnerVolumeSpecName "kube-api-access-wxb65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.792890 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6cld2" Mar 19 18:02:24 crc kubenswrapper[4792]: I0319 18:02:24.886454 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxb65\" (UniqueName: \"kubernetes.io/projected/582089ff-ec32-4c78-bb81-d650559d9659-kube-api-access-wxb65\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.010701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" event={"ID":"582089ff-ec32-4c78-bb81-d650559d9659","Type":"ContainerDied","Data":"540b4e101adc0964e9742dfd2c2578eff03665503150d145a22a498da56201bb"} Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.010758 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540b4e101adc0964e9742dfd2c2578eff03665503150d145a22a498da56201bb" Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.010764 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565722-4zmxd" Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.011587 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.127108 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565716-7r5gc"] Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.136858 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565716-7r5gc"] Mar 19 18:02:25 crc kubenswrapper[4792]: I0319 18:02:25.752183 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f166e811-2b95-4d7a-bf5e-872d7bc853fa" path="/var/lib/kubelet/pods/f166e811-2b95-4d7a-bf5e-872d7bc853fa/volumes" Mar 19 18:02:28 crc kubenswrapper[4792]: I0319 18:02:28.202700 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.787187 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 19 18:02:29 crc kubenswrapper[4792]: E0319 18:02:29.789661 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29d0577-d9f9-4402-a79d-06557b2f2826" containerName="tempest-tests-tempest-tests-runner" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.793054 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29d0577-d9f9-4402-a79d-06557b2f2826" containerName="tempest-tests-tempest-tests-runner" Mar 19 18:02:29 crc kubenswrapper[4792]: E0319 18:02:29.793541 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582089ff-ec32-4c78-bb81-d650559d9659" containerName="oc" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.793566 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="582089ff-ec32-4c78-bb81-d650559d9659" containerName="oc" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.794159 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="582089ff-ec32-4c78-bb81-d650559d9659" containerName="oc" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.794236 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d29d0577-d9f9-4402-a79d-06557b2f2826" containerName="tempest-tests-tempest-tests-runner" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.795422 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.797415 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mtvkb" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.802715 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.889066 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:29 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:29 crc kubenswrapper[4792]: > Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.906001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:29 crc kubenswrapper[4792]: I0319 18:02:29.906103 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5fq\" (UniqueName: \"kubernetes.io/projected/e8c31ae1-ea58-447c-b4cd-d9d121bd5185-kube-api-access-4r5fq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.008616 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.008734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5fq\" (UniqueName: \"kubernetes.io/projected/e8c31ae1-ea58-447c-b4cd-d9d121bd5185-kube-api-access-4r5fq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.009757 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.054077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r5fq\" (UniqueName: \"kubernetes.io/projected/e8c31ae1-ea58-447c-b4cd-d9d121bd5185-kube-api-access-4r5fq\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.072472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e8c31ae1-ea58-447c-b4cd-d9d121bd5185\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.190394 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 19 18:02:30 crc kubenswrapper[4792]: I0319 18:02:30.764607 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 19 18:02:30 crc kubenswrapper[4792]: W0319 18:02:30.764895 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8c31ae1_ea58_447c_b4cd_d9d121bd5185.slice/crio-5987945568e002ce1bb82d818fc76ea5aefb140efcf09f3c5c76cdcaa25d7d23 WatchSource:0}: Error finding container 5987945568e002ce1bb82d818fc76ea5aefb140efcf09f3c5c76cdcaa25d7d23: Status 404 returned error can't find the container with id 5987945568e002ce1bb82d818fc76ea5aefb140efcf09f3c5c76cdcaa25d7d23 Mar 19 18:02:31 crc kubenswrapper[4792]: I0319 18:02:31.074357 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e8c31ae1-ea58-447c-b4cd-d9d121bd5185","Type":"ContainerStarted","Data":"5987945568e002ce1bb82d818fc76ea5aefb140efcf09f3c5c76cdcaa25d7d23"} Mar 19 18:02:31 crc kubenswrapper[4792]: I0319 18:02:31.716384 4792 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-5hq59" podUID="380412c4-57ca-4428-838c-ab93fc6c71cc" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:31 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:31 crc kubenswrapper[4792]: > Mar 19 18:02:31 crc kubenswrapper[4792]: I0319 18:02:31.844423 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:31 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:31 crc kubenswrapper[4792]: > Mar 19 18:02:33 crc kubenswrapper[4792]: I0319 18:02:33.103393 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:33 crc kubenswrapper[4792]: I0319 18:02:33.162744 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gb64t" Mar 19 18:02:34 crc kubenswrapper[4792]: I0319 18:02:34.104631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e8c31ae1-ea58-447c-b4cd-d9d121bd5185","Type":"ContainerStarted","Data":"3206ac30b8e0191bf6787013d3706ae7342b9d1eaec93163bba0b9f459568426"} Mar 19 18:02:34 crc kubenswrapper[4792]: I0319 18:02:34.130561 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.376023569 podStartE2EDuration="5.130541523s" podCreationTimestamp="2026-03-19 18:02:29 +0000 UTC" firstStartedPulling="2026-03-19 18:02:30.782191607 +0000 UTC m=+4913.928249147" lastFinishedPulling="2026-03-19 18:02:33.536709561 +0000 UTC m=+4916.682767101" observedRunningTime="2026-03-19 18:02:34.116858629 +0000 UTC 
m=+4917.262916179" watchObservedRunningTime="2026-03-19 18:02:34.130541523 +0000 UTC m=+4917.276599063" Mar 19 18:02:34 crc kubenswrapper[4792]: I0319 18:02:34.296058 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8m42q" podUID="84114ace-d7fd-41a3-9fa6-87df44501023" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:34 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:34 crc kubenswrapper[4792]: > Mar 19 18:02:39 crc kubenswrapper[4792]: I0319 18:02:39.156634 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:02:39 crc kubenswrapper[4792]: I0319 18:02:39.206251 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:02:40 crc kubenswrapper[4792]: I0319 18:02:40.990800 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 18:02:41 crc kubenswrapper[4792]: I0319 18:02:41.132943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5hq59" Mar 19 18:02:41 crc kubenswrapper[4792]: I0319 18:02:41.751630 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9q2vd" Mar 19 18:02:41 crc kubenswrapper[4792]: I0319 18:02:41.970006 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:41 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:41 crc kubenswrapper[4792]: > Mar 19 18:02:42 crc kubenswrapper[4792]: I0319 18:02:42.179270 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c96bc4ccc-fw8z7" Mar 19 18:02:43 crc kubenswrapper[4792]: I0319 18:02:43.258681 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:43 crc kubenswrapper[4792]: I0319 18:02:43.319001 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8m42q" Mar 19 18:02:49 crc kubenswrapper[4792]: I0319 18:02:49.614568 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:02:49 crc kubenswrapper[4792]: I0319 18:02:49.615365 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8dv94" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" containerID="cri-o://b220d650719d4981fd5a76fbf757ba5a1921cc99fc81b8fb12a8391c07b39a74" gracePeriod=2 Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.347230 4792 generic.go:334] "Generic (PLEG): container finished" podID="b550a284-5a60-4772-a518-0beec88de1ba" containerID="b220d650719d4981fd5a76fbf757ba5a1921cc99fc81b8fb12a8391c07b39a74" exitCode=0 Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.347258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerDied","Data":"b220d650719d4981fd5a76fbf757ba5a1921cc99fc81b8fb12a8391c07b39a74"} Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.530253 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.665788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4sz\" (UniqueName: \"kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz\") pod \"b550a284-5a60-4772-a518-0beec88de1ba\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.665941 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content\") pod \"b550a284-5a60-4772-a518-0beec88de1ba\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.666164 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities\") pod \"b550a284-5a60-4772-a518-0beec88de1ba\" (UID: \"b550a284-5a60-4772-a518-0beec88de1ba\") " Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.667200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities" (OuterVolumeSpecName: "utilities") pod "b550a284-5a60-4772-a518-0beec88de1ba" (UID: "b550a284-5a60-4772-a518-0beec88de1ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.729070 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz" (OuterVolumeSpecName: "kube-api-access-wk4sz") pod "b550a284-5a60-4772-a518-0beec88de1ba" (UID: "b550a284-5a60-4772-a518-0beec88de1ba"). InnerVolumeSpecName "kube-api-access-wk4sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.801781 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b550a284-5a60-4772-a518-0beec88de1ba" (UID: "b550a284-5a60-4772-a518-0beec88de1ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.820404 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.820447 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b550a284-5a60-4772-a518-0beec88de1ba-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:50 crc kubenswrapper[4792]: I0319 18:02:50.820460 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4sz\" (UniqueName: \"kubernetes.io/projected/b550a284-5a60-4772-a518-0beec88de1ba-kube-api-access-wk4sz\") on node \"crc\" DevicePath \"\"" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.361312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dv94" event={"ID":"b550a284-5a60-4772-a518-0beec88de1ba","Type":"ContainerDied","Data":"3a83aa4be91326f7f35a656e9264bdd990956107282c91f512450b3a6b6ec435"} Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.361868 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dv94" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.362449 4792 scope.go:117] "RemoveContainer" containerID="b220d650719d4981fd5a76fbf757ba5a1921cc99fc81b8fb12a8391c07b39a74" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.391076 4792 scope.go:117] "RemoveContainer" containerID="4bc3d940fb7ed7afb508e46876dbde0bdfc44bdaa8dd7f7c7c843dc52842e694" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.417495 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.422399 4792 scope.go:117] "RemoveContainer" containerID="022e62e761cb4e0181760b816006deab7b48b82c5ddc92d09f49b34b99fefbe6" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.432620 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8dv94"] Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.754748 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b550a284-5a60-4772-a518-0beec88de1ba" path="/var/lib/kubelet/pods/b550a284-5a60-4772-a518-0beec88de1ba/volumes" Mar 19 18:02:51 crc kubenswrapper[4792]: I0319 18:02:51.857286 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7gpk" podUID="9faaddd3-77ad-4bc9-97ce-21a824aeb1c0" containerName="registry-server" probeResult="failure" output=< Mar 19 18:02:51 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:02:51 crc kubenswrapper[4792]: > Mar 19 18:02:56 crc kubenswrapper[4792]: I0319 18:02:56.798638 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 19 18:03:00 crc kubenswrapper[4792]: I0319 18:03:00.841195 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 18:03:00 crc kubenswrapper[4792]: I0319 18:03:00.892349 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h7gpk" Mar 19 18:03:10 crc kubenswrapper[4792]: I0319 18:03:10.319314 4792 scope.go:117] "RemoveContainer" containerID="844b13116fe2e66c1f3ecea024ecd0aea7692b7391847b7f030c8931aafae78e" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.324484 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdjk2/must-gather-tf6s4"] Mar 19 18:03:18 crc kubenswrapper[4792]: E0319 18:03:18.325333 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.325346 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" Mar 19 18:03:18 crc kubenswrapper[4792]: E0319 18:03:18.325379 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="extract-utilities" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.325385 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="extract-utilities" Mar 19 18:03:18 crc kubenswrapper[4792]: E0319 18:03:18.325403 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="extract-content" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.325409 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="extract-content" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.325649 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b550a284-5a60-4772-a518-0beec88de1ba" containerName="registry-server" Mar 19 18:03:18 
crc kubenswrapper[4792]: I0319 18:03:18.326959 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.329357 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rdjk2"/"default-dockercfg-fzfdw" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.330114 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rdjk2"/"openshift-service-ca.crt" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.330136 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rdjk2"/"kube-root-ca.crt" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.457984 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z892c\" (UniqueName: \"kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c\") pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.462201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output\") pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.470098 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdjk2/must-gather-tf6s4"] Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.564898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output\") 
pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.565061 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z892c\" (UniqueName: \"kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c\") pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.565996 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output\") pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.590821 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z892c\" (UniqueName: \"kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c\") pod \"must-gather-tf6s4\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:18 crc kubenswrapper[4792]: I0319 18:03:18.646508 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:03:19 crc kubenswrapper[4792]: I0319 18:03:19.621176 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdjk2/must-gather-tf6s4"] Mar 19 18:03:19 crc kubenswrapper[4792]: I0319 18:03:19.631084 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 18:03:19 crc kubenswrapper[4792]: I0319 18:03:19.708932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" event={"ID":"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef","Type":"ContainerStarted","Data":"9fce6f291fac174ab1ef0b3e0179d8655bae9b0b7e06127867e02b99fba05206"} Mar 19 18:03:29 crc kubenswrapper[4792]: I0319 18:03:29.839784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" event={"ID":"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef","Type":"ContainerStarted","Data":"ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85"} Mar 19 18:03:29 crc kubenswrapper[4792]: I0319 18:03:29.840423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" event={"ID":"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef","Type":"ContainerStarted","Data":"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b"} Mar 19 18:03:29 crc kubenswrapper[4792]: I0319 18:03:29.868981 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" podStartSLOduration=3.0608657 podStartE2EDuration="11.868957934s" podCreationTimestamp="2026-03-19 18:03:18 +0000 UTC" firstStartedPulling="2026-03-19 18:03:19.629244697 +0000 UTC m=+4962.775302257" lastFinishedPulling="2026-03-19 18:03:28.437336951 +0000 UTC m=+4971.583394491" observedRunningTime="2026-03-19 18:03:29.856070272 +0000 UTC m=+4973.002127822" watchObservedRunningTime="2026-03-19 18:03:29.868957934 +0000 UTC 
m=+4973.015015484" Mar 19 18:03:34 crc kubenswrapper[4792]: I0319 18:03:34.943424 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-6tvph"] Mar 19 18:03:34 crc kubenswrapper[4792]: I0319 18:03:34.946800 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.037164 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8fq\" (UniqueName: \"kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.038178 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.141965 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8fq\" (UniqueName: \"kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.142404 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.143293 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.170451 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8fq\" (UniqueName: \"kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq\") pod \"crc-debug-6tvph\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.268798 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:03:35 crc kubenswrapper[4792]: I0319 18:03:35.909609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" event={"ID":"97200a55-292d-4e37-8174-52017030a152","Type":"ContainerStarted","Data":"e6ee5c37b77d42c673008fe89d23444cc724b40d72f76b61b279fbe08645c112"} Mar 19 18:03:47 crc kubenswrapper[4792]: I0319 18:03:47.046594 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" event={"ID":"97200a55-292d-4e37-8174-52017030a152","Type":"ContainerStarted","Data":"33118fa182d2393d6323b272d12b524383f4f703b5ba06c7b571b4eef1447489"} Mar 19 18:03:47 crc kubenswrapper[4792]: I0319 18:03:47.067577 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" podStartSLOduration=1.9319772039999998 podStartE2EDuration="13.067555992s" podCreationTimestamp="2026-03-19 18:03:34 +0000 UTC" firstStartedPulling="2026-03-19 18:03:35.329600358 +0000 UTC m=+4978.475657888" lastFinishedPulling="2026-03-19 18:03:46.465179136 +0000 UTC m=+4989.611236676" 
observedRunningTime="2026-03-19 18:03:47.057445896 +0000 UTC m=+4990.203503436" watchObservedRunningTime="2026-03-19 18:03:47.067555992 +0000 UTC m=+4990.213613522" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.197724 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565724-45hfn"] Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.200573 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.204469 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.204557 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.204659 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.215065 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565724-45hfn"] Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.258863 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5n8d\" (UniqueName: \"kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d\") pod \"auto-csr-approver-29565724-45hfn\" (UID: \"5cf652b5-229b-4714-b196-5f51ff19afe2\") " pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.361546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5n8d\" (UniqueName: \"kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d\") pod \"auto-csr-approver-29565724-45hfn\" (UID: 
\"5cf652b5-229b-4714-b196-5f51ff19afe2\") " pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:00 crc kubenswrapper[4792]: I0319 18:04:00.384904 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5n8d\" (UniqueName: \"kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d\") pod \"auto-csr-approver-29565724-45hfn\" (UID: \"5cf652b5-229b-4714-b196-5f51ff19afe2\") " pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:01 crc kubenswrapper[4792]: I0319 18:04:01.621332 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:02 crc kubenswrapper[4792]: I0319 18:04:02.868526 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565724-45hfn"] Mar 19 18:04:04 crc kubenswrapper[4792]: I0319 18:04:04.249766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565724-45hfn" event={"ID":"5cf652b5-229b-4714-b196-5f51ff19afe2","Type":"ContainerStarted","Data":"0b9a5fa0baf07cd60574fb99f6818029c3c36adb9a036677ce2aa57bbc6d8539"} Mar 19 18:04:07 crc kubenswrapper[4792]: I0319 18:04:07.293485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565724-45hfn" event={"ID":"5cf652b5-229b-4714-b196-5f51ff19afe2","Type":"ContainerStarted","Data":"9fa1b015a77b00e4868129e826b20cb301d37c124354d04e5a258f708ec43675"} Mar 19 18:04:07 crc kubenswrapper[4792]: I0319 18:04:07.313692 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565724-45hfn" podStartSLOduration=4.99611824 podStartE2EDuration="7.313664259s" podCreationTimestamp="2026-03-19 18:04:00 +0000 UTC" firstStartedPulling="2026-03-19 18:04:03.642344605 +0000 UTC m=+5006.788402155" lastFinishedPulling="2026-03-19 18:04:05.959890634 +0000 UTC m=+5009.105948174" 
observedRunningTime="2026-03-19 18:04:07.305175737 +0000 UTC m=+5010.451233277" watchObservedRunningTime="2026-03-19 18:04:07.313664259 +0000 UTC m=+5010.459721799" Mar 19 18:04:08 crc kubenswrapper[4792]: I0319 18:04:08.304681 4792 generic.go:334] "Generic (PLEG): container finished" podID="5cf652b5-229b-4714-b196-5f51ff19afe2" containerID="9fa1b015a77b00e4868129e826b20cb301d37c124354d04e5a258f708ec43675" exitCode=0 Mar 19 18:04:08 crc kubenswrapper[4792]: I0319 18:04:08.304787 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565724-45hfn" event={"ID":"5cf652b5-229b-4714-b196-5f51ff19afe2","Type":"ContainerDied","Data":"9fa1b015a77b00e4868129e826b20cb301d37c124354d04e5a258f708ec43675"} Mar 19 18:04:09 crc kubenswrapper[4792]: I0319 18:04:09.782192 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:09 crc kubenswrapper[4792]: I0319 18:04:09.906611 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5n8d\" (UniqueName: \"kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d\") pod \"5cf652b5-229b-4714-b196-5f51ff19afe2\" (UID: \"5cf652b5-229b-4714-b196-5f51ff19afe2\") " Mar 19 18:04:09 crc kubenswrapper[4792]: I0319 18:04:09.916396 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d" (OuterVolumeSpecName: "kube-api-access-r5n8d") pod "5cf652b5-229b-4714-b196-5f51ff19afe2" (UID: "5cf652b5-229b-4714-b196-5f51ff19afe2"). InnerVolumeSpecName "kube-api-access-r5n8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.010440 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5n8d\" (UniqueName: \"kubernetes.io/projected/5cf652b5-229b-4714-b196-5f51ff19afe2-kube-api-access-r5n8d\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.329069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565724-45hfn" event={"ID":"5cf652b5-229b-4714-b196-5f51ff19afe2","Type":"ContainerDied","Data":"0b9a5fa0baf07cd60574fb99f6818029c3c36adb9a036677ce2aa57bbc6d8539"} Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.329110 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b9a5fa0baf07cd60574fb99f6818029c3c36adb9a036677ce2aa57bbc6d8539" Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.329114 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565724-45hfn" Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.421573 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565718-j7wlx"] Mar 19 18:04:10 crc kubenswrapper[4792]: I0319 18:04:10.432108 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565718-j7wlx"] Mar 19 18:04:11 crc kubenswrapper[4792]: I0319 18:04:11.751635 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1adc70a-0ee6-4cd8-a5d8-89f968d785a6" path="/var/lib/kubelet/pods/c1adc70a-0ee6-4cd8-a5d8-89f968d785a6/volumes" Mar 19 18:04:20 crc kubenswrapper[4792]: I0319 18:04:20.231082 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 19 18:04:20 crc kubenswrapper[4792]: I0319 18:04:20.231703 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:04:30 crc kubenswrapper[4792]: I0319 18:04:30.590327 4792 generic.go:334] "Generic (PLEG): container finished" podID="97200a55-292d-4e37-8174-52017030a152" containerID="33118fa182d2393d6323b272d12b524383f4f703b5ba06c7b571b4eef1447489" exitCode=0 Mar 19 18:04:30 crc kubenswrapper[4792]: I0319 18:04:30.590430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" event={"ID":"97200a55-292d-4e37-8174-52017030a152","Type":"ContainerDied","Data":"33118fa182d2393d6323b272d12b524383f4f703b5ba06c7b571b4eef1447489"} Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.756214 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.808492 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-6tvph"] Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.820301 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-6tvph"] Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.860990 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn8fq\" (UniqueName: \"kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq\") pod \"97200a55-292d-4e37-8174-52017030a152\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.861150 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host\") pod \"97200a55-292d-4e37-8174-52017030a152\" (UID: \"97200a55-292d-4e37-8174-52017030a152\") " Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.861234 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host" (OuterVolumeSpecName: "host") pod "97200a55-292d-4e37-8174-52017030a152" (UID: "97200a55-292d-4e37-8174-52017030a152"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.861779 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97200a55-292d-4e37-8174-52017030a152-host\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.867886 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq" (OuterVolumeSpecName: "kube-api-access-nn8fq") pod "97200a55-292d-4e37-8174-52017030a152" (UID: "97200a55-292d-4e37-8174-52017030a152"). InnerVolumeSpecName "kube-api-access-nn8fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:04:31 crc kubenswrapper[4792]: I0319 18:04:31.963939 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn8fq\" (UniqueName: \"kubernetes.io/projected/97200a55-292d-4e37-8174-52017030a152-kube-api-access-nn8fq\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:32 crc kubenswrapper[4792]: I0319 18:04:32.629563 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ee5c37b77d42c673008fe89d23444cc724b40d72f76b61b279fbe08645c112" Mar 19 18:04:32 crc kubenswrapper[4792]: I0319 18:04:32.629635 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-6tvph" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.042498 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-9tv2h"] Mar 19 18:04:33 crc kubenswrapper[4792]: E0319 18:04:33.043526 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf652b5-229b-4714-b196-5f51ff19afe2" containerName="oc" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.043542 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf652b5-229b-4714-b196-5f51ff19afe2" containerName="oc" Mar 19 18:04:33 crc kubenswrapper[4792]: E0319 18:04:33.043574 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97200a55-292d-4e37-8174-52017030a152" containerName="container-00" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.043580 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97200a55-292d-4e37-8174-52017030a152" containerName="container-00" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.043965 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf652b5-229b-4714-b196-5f51ff19afe2" containerName="oc" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.043989 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97200a55-292d-4e37-8174-52017030a152" containerName="container-00" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.044893 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.194714 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756l7\" (UniqueName: \"kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.194798 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.296597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756l7\" (UniqueName: \"kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.296936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.297171 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc 
kubenswrapper[4792]: I0319 18:04:33.323231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756l7\" (UniqueName: \"kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7\") pod \"crc-debug-9tv2h\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.373159 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.639376 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" event={"ID":"a42c36c8-bd0e-4855-bddc-0240a5d65638","Type":"ContainerStarted","Data":"8b417a84399417ed643e8e9b9ec9e128c0837ee05aa00a8b17abe2365f4291f1"} Mar 19 18:04:33 crc kubenswrapper[4792]: I0319 18:04:33.754625 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97200a55-292d-4e37-8174-52017030a152" path="/var/lib/kubelet/pods/97200a55-292d-4e37-8174-52017030a152/volumes" Mar 19 18:04:34 crc kubenswrapper[4792]: I0319 18:04:34.651745 4792 generic.go:334] "Generic (PLEG): container finished" podID="a42c36c8-bd0e-4855-bddc-0240a5d65638" containerID="86020ccddfc363f7071d15c5faf1f61d3ce68f5d6bd48bbdcc335bc74ce15d24" exitCode=0 Mar 19 18:04:34 crc kubenswrapper[4792]: I0319 18:04:34.651809 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" event={"ID":"a42c36c8-bd0e-4855-bddc-0240a5d65638","Type":"ContainerDied","Data":"86020ccddfc363f7071d15c5faf1f61d3ce68f5d6bd48bbdcc335bc74ce15d24"} Mar 19 18:04:35 crc kubenswrapper[4792]: I0319 18:04:35.515776 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-9tv2h"] Mar 19 18:04:35 crc kubenswrapper[4792]: I0319 18:04:35.526622 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-rdjk2/crc-debug-9tv2h"] Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.259095 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.370899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host\") pod \"a42c36c8-bd0e-4855-bddc-0240a5d65638\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.371025 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host" (OuterVolumeSpecName: "host") pod "a42c36c8-bd0e-4855-bddc-0240a5d65638" (UID: "a42c36c8-bd0e-4855-bddc-0240a5d65638"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.371526 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-756l7\" (UniqueName: \"kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7\") pod \"a42c36c8-bd0e-4855-bddc-0240a5d65638\" (UID: \"a42c36c8-bd0e-4855-bddc-0240a5d65638\") " Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.372696 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a42c36c8-bd0e-4855-bddc-0240a5d65638-host\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.377424 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7" (OuterVolumeSpecName: "kube-api-access-756l7") pod "a42c36c8-bd0e-4855-bddc-0240a5d65638" (UID: "a42c36c8-bd0e-4855-bddc-0240a5d65638"). 
InnerVolumeSpecName "kube-api-access-756l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.474811 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-756l7\" (UniqueName: \"kubernetes.io/projected/a42c36c8-bd0e-4855-bddc-0240a5d65638-kube-api-access-756l7\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.675643 4792 scope.go:117] "RemoveContainer" containerID="86020ccddfc363f7071d15c5faf1f61d3ce68f5d6bd48bbdcc335bc74ce15d24" Mar 19 18:04:36 crc kubenswrapper[4792]: I0319 18:04:36.675672 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-9tv2h" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.296613 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-xr9sw"] Mar 19 18:04:37 crc kubenswrapper[4792]: E0319 18:04:37.298523 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42c36c8-bd0e-4855-bddc-0240a5d65638" containerName="container-00" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.298550 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42c36c8-bd0e-4855-bddc-0240a5d65638" containerName="container-00" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.299347 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42c36c8-bd0e-4855-bddc-0240a5d65638" containerName="container-00" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.300925 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.396086 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.396298 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmfh\" (UniqueName: \"kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.497960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmfh\" (UniqueName: \"kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.498089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc kubenswrapper[4792]: I0319 18:04:37.498236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:37 crc 
kubenswrapper[4792]: I0319 18:04:37.758179 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42c36c8-bd0e-4855-bddc-0240a5d65638" path="/var/lib/kubelet/pods/a42c36c8-bd0e-4855-bddc-0240a5d65638/volumes" Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.212219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmfh\" (UniqueName: \"kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh\") pod \"crc-debug-xr9sw\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.220906 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.701434 4792 generic.go:334] "Generic (PLEG): container finished" podID="6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" containerID="2923d6f4840ceb3e4939034adf2f1e8d7786a4c9ff5e8fb262ff151de925d338" exitCode=0 Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.701729 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" event={"ID":"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb","Type":"ContainerDied","Data":"2923d6f4840ceb3e4939034adf2f1e8d7786a4c9ff5e8fb262ff151de925d338"} Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.701764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" event={"ID":"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb","Type":"ContainerStarted","Data":"7265371f63ace9cef84613f82411d528f74bc9b1a7d4313a089bdf65786d6e8f"} Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.748243 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdjk2/crc-debug-xr9sw"] Mar 19 18:04:38 crc kubenswrapper[4792]: I0319 18:04:38.764054 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-rdjk2/crc-debug-xr9sw"] Mar 19 18:04:39 crc kubenswrapper[4792]: I0319 18:04:39.865964 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:39 crc kubenswrapper[4792]: I0319 18:04:39.980955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host\") pod \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " Mar 19 18:04:39 crc kubenswrapper[4792]: I0319 18:04:39.981334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmfh\" (UniqueName: \"kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh\") pod \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\" (UID: \"6bad2ac5-bbc4-4f0b-9f83-ec01944321bb\") " Mar 19 18:04:39 crc kubenswrapper[4792]: I0319 18:04:39.982006 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host" (OuterVolumeSpecName: "host") pod "6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" (UID: "6bad2ac5-bbc4-4f0b-9f83-ec01944321bb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 18:04:39 crc kubenswrapper[4792]: I0319 18:04:39.991986 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh" (OuterVolumeSpecName: "kube-api-access-ksmfh") pod "6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" (UID: "6bad2ac5-bbc4-4f0b-9f83-ec01944321bb"). InnerVolumeSpecName "kube-api-access-ksmfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:04:40 crc kubenswrapper[4792]: I0319 18:04:40.084581 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmfh\" (UniqueName: \"kubernetes.io/projected/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-kube-api-access-ksmfh\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:40 crc kubenswrapper[4792]: I0319 18:04:40.084631 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb-host\") on node \"crc\" DevicePath \"\"" Mar 19 18:04:40 crc kubenswrapper[4792]: I0319 18:04:40.730721 4792 scope.go:117] "RemoveContainer" containerID="2923d6f4840ceb3e4939034adf2f1e8d7786a4c9ff5e8fb262ff151de925d338" Mar 19 18:04:40 crc kubenswrapper[4792]: I0319 18:04:40.730763 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/crc-debug-xr9sw" Mar 19 18:04:41 crc kubenswrapper[4792]: I0319 18:04:41.756920 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" path="/var/lib/kubelet/pods/6bad2ac5-bbc4-4f0b-9f83-ec01944321bb/volumes" Mar 19 18:04:50 crc kubenswrapper[4792]: I0319 18:04:50.231819 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:04:50 crc kubenswrapper[4792]: I0319 18:04:50.232404 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:05:10 crc kubenswrapper[4792]: 
I0319 18:05:10.709415 4792 scope.go:117] "RemoveContainer" containerID="c821492a41e4e28213fd5c8fd0480a7958a2672bafdfa46b315b1fcd778bac3c" Mar 19 18:05:15 crc kubenswrapper[4792]: I0319 18:05:15.692737 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5d8206f4-d2ae-4db9-9893-091e0f602d74/aodh-api/0.log" Mar 19 18:05:15 crc kubenswrapper[4792]: I0319 18:05:15.903648 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5d8206f4-d2ae-4db9-9893-091e0f602d74/aodh-evaluator/0.log" Mar 19 18:05:15 crc kubenswrapper[4792]: I0319 18:05:15.985863 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5d8206f4-d2ae-4db9-9893-091e0f602d74/aodh-notifier/0.log" Mar 19 18:05:15 crc kubenswrapper[4792]: I0319 18:05:15.987664 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5d8206f4-d2ae-4db9-9893-091e0f602d74/aodh-listener/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.152096 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c5884965b-5vqgk_906667a8-fd5c-499a-9e1c-6fc52661d893/barbican-api/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.175214 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c5884965b-5vqgk_906667a8-fd5c-499a-9e1c-6fc52661d893/barbican-api-log/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.345252 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bf88c4df4-qpgln_2b323aac-f5a2-4adf-8d27-3c1b194b3b3f/barbican-keystone-listener/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.510919 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bf88c4df4-qpgln_2b323aac-f5a2-4adf-8d27-3c1b194b3b3f/barbican-keystone-listener-log/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.573994 4792 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_barbican-worker-6766d67c9f-bwz6s_053f5f35-2164-43eb-9223-f36a5de46700/barbican-worker/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.811209 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6766d67c9f-bwz6s_053f5f35-2164-43eb-9223-f36a5de46700/barbican-worker-log/0.log" Mar 19 18:05:16 crc kubenswrapper[4792]: I0319 18:05:16.867586 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pczzc_9a911839-8c9b-43da-9ef6-eed89833426e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.034435 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05b1938b-461b-46fe-9fb9-28e17c7591bc/ceilometer-central-agent/1.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.090605 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05b1938b-461b-46fe-9fb9-28e17c7591bc/ceilometer-notification-agent/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.112665 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05b1938b-461b-46fe-9fb9-28e17c7591bc/proxy-httpd/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.120375 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05b1938b-461b-46fe-9fb9-28e17c7591bc/ceilometer-central-agent/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.266864 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05b1938b-461b-46fe-9fb9-28e17c7591bc/sg-core/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.335668 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9c0efe25-7ec1-4e80-80c8-812972764179/cinder-api-log/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.421674 4792 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9c0efe25-7ec1-4e80-80c8-812972764179/cinder-api/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.550275 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4d9a5546-9c67-4684-8efd-c6c515dcb25d/cinder-scheduler/1.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.576037 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4d9a5546-9c67-4684-8efd-c6c515dcb25d/cinder-scheduler/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.671084 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4d9a5546-9c67-4684-8efd-c6c515dcb25d/probe/0.log" Mar 19 18:05:17 crc kubenswrapper[4792]: I0319 18:05:17.819773 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-q4pn8_4a1da42a-e4a2-4624-a6e1-57f2d83d331c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.058517 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m6tk9_613fdf94-6607-47f4-aa3a-c99c1c500b9e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.061591 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-99s9r_e705cf76-1371-4677-80a0-8582f8695a29/init/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.250710 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-99s9r_e705cf76-1371-4677-80a0-8582f8695a29/init/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.274492 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-99s9r_e705cf76-1371-4677-80a0-8582f8695a29/dnsmasq-dns/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.395522 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-52p9h_fbd720bf-e288-4d7c-8c10-4f61bdfee093/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.500616 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d2e42385-c657-4f83-9f18-82209d504136/glance-httpd/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.535307 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d2e42385-c657-4f83-9f18-82209d504136/glance-log/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.730788 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_641b598a-d3b7-46bf-a1cc-aecf296a0afc/glance-httpd/0.log" Mar 19 18:05:18 crc kubenswrapper[4792]: I0319 18:05:18.760580 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_641b598a-d3b7-46bf-a1cc-aecf296a0afc/glance-log/0.log" Mar 19 18:05:19 crc kubenswrapper[4792]: I0319 18:05:19.329517 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6bf7b6486c-5hf6r_bf529b70-4061-4841-ae2a-553db6001e83/heat-api/0.log" Mar 19 18:05:19 crc kubenswrapper[4792]: I0319 18:05:19.418267 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6cf6b5d9c8-vct92_737815ac-f033-4b2b-be52-1418b60262ed/heat-engine/0.log" Mar 19 18:05:19 crc kubenswrapper[4792]: I0319 18:05:19.440677 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7d76c97877-4zbfk_e97170a9-754d-4a31-a542-fc6336f483bb/heat-cfnapi/0.log" Mar 19 18:05:19 crc kubenswrapper[4792]: 
I0319 18:05:19.712875 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kmbdl_f429099b-d78b-4ecc-9606-16da762eb608/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:19 crc kubenswrapper[4792]: I0319 18:05:19.913601 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mf8b8_e670e3cd-afa3-46ee-877d-1e0b61c4cbe7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.032398 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29565721-7r4xx_3d3d5772-f179-4d7b-bbdd-5e6d7a276777/keystone-cron/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.144174 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3c37ff21-a32e-4b93-9292-3648b8cc3a8e/kube-state-metrics/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.230607 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.230659 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.230701 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.232023 4792 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.232642 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" gracePeriod=600 Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.399817 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-8tbh9_2f6f7544-7d00-409e-baf3-688917113063/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.546779 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-68456dfd85-xsh6s_a782fd7c-52d9-472c-98f4-a390ca0d94b6/keystone-api/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: I0319 18:05:20.719899 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ntbg4_697a022e-dbca-47a6-9034-353e9a5cecde/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:20 crc kubenswrapper[4792]: E0319 18:05:20.932552 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.153521 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_eef31711-ec31-4b3c-b5b5-e27be14b85ef/mysqld-exporter/0.log" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.239036 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" exitCode=0 Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.239080 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db"} Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.239113 4792 scope.go:117] "RemoveContainer" containerID="72469a44f2a722113f67c35613f06445f8eb914775e86b2980ab0a82d9718925" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.239993 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:05:21 crc kubenswrapper[4792]: E0319 18:05:21.240396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.506899 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-678f6bc965-29ckw_b44482c7-fbab-40ba-b3ea-44a568d31edd/neutron-httpd/0.log" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.516181 4792 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-678f6bc965-29ckw_b44482c7-fbab-40ba-b3ea-44a568d31edd/neutron-api/0.log" Mar 19 18:05:21 crc kubenswrapper[4792]: I0319 18:05:21.661258 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cn7d_9dca6ae2-e53e-4abe-83aa-a5d7967cfb2a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:22 crc kubenswrapper[4792]: I0319 18:05:22.166107 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cd06887b-abf2-4787-9c4e-db0eed74d8ca/nova-api-log/0.log" Mar 19 18:05:22 crc kubenswrapper[4792]: I0319 18:05:22.232867 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d4354ccf-6194-4050-9e4e-342d090f707d/nova-cell0-conductor-conductor/0.log" Mar 19 18:05:22 crc kubenswrapper[4792]: I0319 18:05:22.580678 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cd06887b-abf2-4787-9c4e-db0eed74d8ca/nova-api-api/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.011681 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fa6b639c-ce8f-4c29-a41e-28a31d1d1ba4/nova-cell1-novncproxy-novncproxy/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.012036 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1934f0b5-96a5-41e7-8b10-f06f65ec46e1/nova-cell1-conductor-conductor/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.371289 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d5ac4a92-3577-4b41-8b74-2598a64d131c/nova-metadata-log/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.746391 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xcfck_93d49310-d5d8-4e87-9162-296093e9adc5/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.754262 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7b384386-fda0-42ba-9b7b-ddb790da02b5/nova-scheduler-scheduler/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.848617 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d5ac4a92-3577-4b41-8b74-2598a64d131c/nova-metadata-metadata/0.log" Mar 19 18:05:23 crc kubenswrapper[4792]: I0319 18:05:23.854166 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_74993dec-a63b-4856-913e-39ec56f88058/mysql-bootstrap/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.021982 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_74993dec-a63b-4856-913e-39ec56f88058/mysql-bootstrap/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.096428 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_74993dec-a63b-4856-913e-39ec56f88058/galera/1.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.129088 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_74993dec-a63b-4856-913e-39ec56f88058/galera/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.234375 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575/mysql-bootstrap/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.455077 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575/mysql-bootstrap/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.561098 4792 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575/galera/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.572074 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8f8c08a-0b4d-4c46-92a7-89f8e3aaa575/galera/1.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.675043 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d7885af7-09a3-4ea4-b59f-2de96f42fd0b/openstackclient/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.781968 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hkjvd_55ea5748-5aed-4ae4-a590-94a23170b160/ovn-controller/0.log" Mar 19 18:05:24 crc kubenswrapper[4792]: I0319 18:05:24.908920 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gbzs2_f7da4046-dc50-4b5b-a8bc-aecf5628f7ca/openstack-network-exporter/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.015590 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56rd9_bf820855-761d-475e-b080-1bf46ddddfd3/ovsdb-server-init/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.223675 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56rd9_bf820855-761d-475e-b080-1bf46ddddfd3/ovsdb-server/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.241631 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56rd9_bf820855-761d-475e-b080-1bf46ddddfd3/ovsdb-server-init/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.242649 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56rd9_bf820855-761d-475e-b080-1bf46ddddfd3/ovs-vswitchd/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.455702 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_7c80c62c-85e8-4de7-984b-eac919232564/openstack-network-exporter/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.506434 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8mhcn_e472cc5f-822a-4e33-8f16-04cc02cbae89/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.635876 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7c80c62c-85e8-4de7-984b-eac919232564/ovn-northd/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.657104 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce9f56e3-2b21-4854-ada6-3c81b790ccab/openstack-network-exporter/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.790458 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce9f56e3-2b21-4854-ada6-3c81b790ccab/ovsdbserver-nb/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.898827 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2f154134-be00-48ab-a2b9-28cce44cc28a/openstack-network-exporter/0.log" Mar 19 18:05:25 crc kubenswrapper[4792]: I0319 18:05:25.950171 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2f154134-be00-48ab-a2b9-28cce44cc28a/ovsdbserver-sb/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.243162 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75d8cc585d-x4dns_8ccda1e8-2f07-47f8-9887-c72ae0b11c89/placement-api/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.267891 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75d8cc585d-x4dns_8ccda1e8-2f07-47f8-9887-c72ae0b11c89/placement-log/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.347889 4792 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_94c78995-4f1f-4eca-a3fb-df83caafa647/init-config-reloader/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.471725 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_94c78995-4f1f-4eca-a3fb-df83caafa647/init-config-reloader/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.539589 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_94c78995-4f1f-4eca-a3fb-df83caafa647/prometheus/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.585801 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_94c78995-4f1f-4eca-a3fb-df83caafa647/config-reloader/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.636305 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_94c78995-4f1f-4eca-a3fb-df83caafa647/thanos-sidecar/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.805688 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c585782f-9e4f-4495-9e68-a10aa5fc90b0/setup-container/0.log" Mar 19 18:05:26 crc kubenswrapper[4792]: I0319 18:05:26.968636 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c585782f-9e4f-4495-9e68-a10aa5fc90b0/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.046896 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c585782f-9e4f-4495-9e68-a10aa5fc90b0/rabbitmq/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.070767 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_915362ac-1fcd-4e45-9dea-c19af9bee06e/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.283331 4792 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_915362ac-1fcd-4e45-9dea-c19af9bee06e/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.289554 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_47acf5ef-2d85-427c-81e6-8b8707505206/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.318811 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_915362ac-1fcd-4e45-9dea-c19af9bee06e/rabbitmq/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.661370 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_ae048e02-6ff7-4fa8-81c0-57ab3c051662/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.667795 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_47acf5ef-2d85-427c-81e6-8b8707505206/rabbitmq/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.669825 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_47acf5ef-2d85-427c-81e6-8b8707505206/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.926550 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_ae048e02-6ff7-4fa8-81c0-57ab3c051662/setup-container/0.log" Mar 19 18:05:27 crc kubenswrapper[4792]: I0319 18:05:27.946124 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_ae048e02-6ff7-4fa8-81c0-57ab3c051662/rabbitmq/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.009622 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hlj4l_32de0a99-ddf0-436b-ab7a-9223cc6d5de1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.132687 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zmbb4_e643e89e-4c37-4a1d-a4ee-9a47fab99015/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.269295 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4zljk_50866ef3-6742-4a83-a766-2c075a8d45cb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.474458 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jz4b8_82a4769a-60b7-4414-a78b-c51858d9746f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.532117 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6qj6v_4b88ce9a-9321-442b-ad61-bf8bdf229685/ssh-known-hosts-edpm-deployment/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.796205 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84cdd6c86c-5thrd_5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4/proxy-server/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.932129 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-44cgh_46fc890d-ef4d-49ec-8f22-5200a9ec6167/swift-ring-rebalance/0.log" Mar 19 18:05:28 crc kubenswrapper[4792]: I0319 18:05:28.956067 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84cdd6c86c-5thrd_5039c3a0-47c3-4b1e-94d4-61bd47a4f3f4/proxy-httpd/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.070986 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/account-auditor/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.120983 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/account-reaper/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.243293 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/container-auditor/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.252694 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/account-replicator/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.260100 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/account-server/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.367811 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/container-replicator/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.496683 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/container-server/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.512983 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/container-updater/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.513170 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/object-auditor/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.684091 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/object-expirer/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.701584 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/object-replicator/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.753317 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/object-server/0.log" Mar 19 18:05:29 crc kubenswrapper[4792]: I0319 18:05:29.792455 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/object-updater/0.log" Mar 19 18:05:30 crc kubenswrapper[4792]: I0319 18:05:30.551616 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/rsync/0.log" Mar 19 18:05:30 crc kubenswrapper[4792]: I0319 18:05:30.765022 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_797388ae-9d68-43cc-9e1b-063da11e1a5a/swift-recon-cron/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.364555 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4wd6v_1e475cad-bfb5-4b1c-b4fe-d3553a3c0d89/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.579324 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7f277fa1-4306-4605-b619-ab8b8df16ae5/memcached/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.624616 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-jg7rm_908382e0-1083-4b73-94f3-8be945974902/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.645397 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e8c31ae1-ea58-447c-b4cd-d9d121bd5185/test-operator-logs-container/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.853869 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d29d0577-d9f9-4402-a79d-06557b2f2826/tempest-tests-tempest-tests-runner/0.log" Mar 19 18:05:31 crc kubenswrapper[4792]: I0319 18:05:31.892147 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-krwg6_5c39cf60-90bf-4a71-99ca-1ce29cf5450d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 19 18:05:36 crc kubenswrapper[4792]: I0319 18:05:36.740567 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:05:36 crc kubenswrapper[4792]: E0319 18:05:36.741420 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:05:47 crc kubenswrapper[4792]: I0319 18:05:47.739789 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:05:47 crc kubenswrapper[4792]: E0319 18:05:47.741146 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:05:59 crc kubenswrapper[4792]: I0319 18:05:59.677574 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/util/0.log" Mar 19 18:05:59 crc kubenswrapper[4792]: I0319 18:05:59.852069 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/pull/0.log" Mar 19 18:05:59 crc kubenswrapper[4792]: I0319 18:05:59.883923 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/util/0.log" Mar 19 18:05:59 crc kubenswrapper[4792]: I0319 18:05:59.933513 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/pull/0.log" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.093241 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/util/0.log" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.113829 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/extract/0.log" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.155659 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565726-ts6td"] Mar 19 18:06:00 crc kubenswrapper[4792]: E0319 18:06:00.156331 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" containerName="container-00" Mar 19 18:06:00 
crc kubenswrapper[4792]: I0319 18:06:00.156354 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" containerName="container-00" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.156667 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bad2ac5-bbc4-4f0b-9f83-ec01944321bb" containerName="container-00" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.157720 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.160923 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.161032 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.161068 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.172807 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565726-ts6td"] Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.209814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f91b0eb7ee7a9999c149511f0710d33dfe6ba4b4ba3bfc4ac72f12c14dtvtj_2d8fec59-30ff-4ec5-a64f-e7e49b58e6b1/pull/0.log" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.225812 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zqr\" (UniqueName: \"kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr\") pod \"auto-csr-approver-29565726-ts6td\" (UID: \"e97b0dad-b721-4875-adcf-00c1de0a73c7\") " pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:00 crc 
kubenswrapper[4792]: I0319 18:06:00.327530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zqr\" (UniqueName: \"kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr\") pod \"auto-csr-approver-29565726-ts6td\" (UID: \"e97b0dad-b721-4875-adcf-00c1de0a73c7\") " pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.351662 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6zqr\" (UniqueName: \"kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr\") pod \"auto-csr-approver-29565726-ts6td\" (UID: \"e97b0dad-b721-4875-adcf-00c1de0a73c7\") " pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.393291 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-2487f_9bb5702e-9617-4fb3-a13b-32aa8f7209bc/manager/0.log" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.486701 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:00 crc kubenswrapper[4792]: I0319 18:06:00.739860 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:06:00 crc kubenswrapper[4792]: E0319 18:06:00.740445 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:06:01 crc kubenswrapper[4792]: I0319 18:06:01.468291 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-cn88d_c82a8813-bf57-4e7c-88fb-34b0ebee51be/manager/0.log" Mar 19 18:06:01 crc kubenswrapper[4792]: I0319 18:06:01.590884 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-8272z_29961080-94d4-4275-8d1a-baf1405cf2bb/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:01.999434 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-v6tfl_335bce01-df52-41ca-b47a-daa5e8ac917e/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.057680 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-zkx8w_bce0486f-f235-464e-acd7-bc8da076eebe/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.242518 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565726-ts6td"] Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.463857 4792 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-lkhgd_b7f6258a-2ce1-482c-84ee-e869f191cb69/manager/1.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.588092 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-lkhgd_b7f6258a-2ce1-482c-84ee-e869f191cb69/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.726994 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565726-ts6td" event={"ID":"e97b0dad-b721-4875-adcf-00c1de0a73c7","Type":"ContainerStarted","Data":"b86182711eab06676d08e977fe81ff2ef6b3d2be456acac322bb6822079948bb"} Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.740016 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-p22vv_ae024059-6924-482c-88b6-c845e6932026/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.920895 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-s2pjr_80afdbc0-ff4c-4806-884d-ef3542b4de9c/manager/0.log" Mar 19 18:06:02 crc kubenswrapper[4792]: I0319 18:06:02.953275 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-h5w4z_d14a657c-5e70-4847-9b07-f85ce53d7757/manager/0.log" Mar 19 18:06:03 crc kubenswrapper[4792]: I0319 18:06:03.760427 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-rd29l_a1ed7ec7-1763-4593-a115-448e7da65482/manager/0.log" Mar 19 18:06:03 crc kubenswrapper[4792]: I0319 18:06:03.905636 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-b66p7_ca8f4495-eabc-425f-82dd-f3c5329de925/manager/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.056305 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-mdbhz_d89e09ff-441b-491e-98f7-9bf618322505/manager/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.240621 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-dz5pk_33f808bd-605c-41c7-94fb-92ceab7de0a9/manager/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.262380 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-mt22x_74eec49e-2c05-49ce-874b-654ec80018e6/manager/1.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.279427 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-mt22x_74eec49e-2c05-49ce-874b-654ec80018e6/manager/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.457100 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-lhq2p_29107ce9-41d6-410b-b256-723555fd6169/manager/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.615994 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7658474f4d-cpqrx_2f5d3346-4746-45e3-a73e-3d94d586e34d/operator/0.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.749294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565726-ts6td" event={"ID":"e97b0dad-b721-4875-adcf-00c1de0a73c7","Type":"ContainerStarted","Data":"121f58945364e31f15bc796e99de937f906894e025da9a25af86563b11e8218c"} Mar 19 
18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.776094 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v9gs9_2d317332-2487-47d0-b052-eb6bd421c0d1/registry-server/1.log" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.777545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565726-ts6td" podStartSLOduration=3.092881938 podStartE2EDuration="4.777532147s" podCreationTimestamp="2026-03-19 18:06:00 +0000 UTC" firstStartedPulling="2026-03-19 18:06:02.246714919 +0000 UTC m=+5125.392772459" lastFinishedPulling="2026-03-19 18:06:03.931365138 +0000 UTC m=+5127.077422668" observedRunningTime="2026-03-19 18:06:04.763677688 +0000 UTC m=+5127.909735228" watchObservedRunningTime="2026-03-19 18:06:04.777532147 +0000 UTC m=+5127.923589687" Mar 19 18:06:04 crc kubenswrapper[4792]: I0319 18:06:04.780225 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v9gs9_2d317332-2487-47d0-b052-eb6bd421c0d1/registry-server/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.122109 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-7xldx_e4f68cf5-d501-4468-a9a4-b959ae49db87/manager/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.132727 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-gkg4f_6832677c-467f-4786-b2f8-9c999c94f3ba/manager/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.277925 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-x2pbv_5458fc2b-b774-488b-a5e0-1f66d2df8bfc/operator/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.345281 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-p4npr_2dceb468-ce3f-4650-ae5e-694664ffb360/manager/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.634826 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-7sklh_23c3a809-9d7c-4d60-be1f-2fbc1583e5d6/manager/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.764707 4792 generic.go:334] "Generic (PLEG): container finished" podID="e97b0dad-b721-4875-adcf-00c1de0a73c7" containerID="121f58945364e31f15bc796e99de937f906894e025da9a25af86563b11e8218c" exitCode=0 Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.764758 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565726-ts6td" event={"ID":"e97b0dad-b721-4875-adcf-00c1de0a73c7","Type":"ContainerDied","Data":"121f58945364e31f15bc796e99de937f906894e025da9a25af86563b11e8218c"} Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.883646 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-rg6qq_1ca9378b-68d2-4281-b45a-7f40c30bae7c/manager/0.log" Mar 19 18:06:05 crc kubenswrapper[4792]: I0319 18:06:05.975487 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-78877dc965-lmkcj_91a44cfc-5acd-4b7c-814c-1521b5e2b85d/manager/0.log" Mar 19 18:06:06 crc kubenswrapper[4792]: I0319 18:06:06.272365 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c7d9f85c5-9nrsb_8da4bcd6-1b9f-450c-9ed2-34bd70bc6a8f/manager/0.log" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.329129 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.448562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zqr\" (UniqueName: \"kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr\") pod \"e97b0dad-b721-4875-adcf-00c1de0a73c7\" (UID: \"e97b0dad-b721-4875-adcf-00c1de0a73c7\") " Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.457097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr" (OuterVolumeSpecName: "kube-api-access-c6zqr") pod "e97b0dad-b721-4875-adcf-00c1de0a73c7" (UID: "e97b0dad-b721-4875-adcf-00c1de0a73c7"). InnerVolumeSpecName "kube-api-access-c6zqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.551076 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zqr\" (UniqueName: \"kubernetes.io/projected/e97b0dad-b721-4875-adcf-00c1de0a73c7-kube-api-access-c6zqr\") on node \"crc\" DevicePath \"\"" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.788928 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565726-ts6td" event={"ID":"e97b0dad-b721-4875-adcf-00c1de0a73c7","Type":"ContainerDied","Data":"b86182711eab06676d08e977fe81ff2ef6b3d2be456acac322bb6822079948bb"} Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.789268 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86182711eab06676d08e977fe81ff2ef6b3d2be456acac322bb6822079948bb" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.788988 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565726-ts6td" Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.834059 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565720-27gv7"] Mar 19 18:06:07 crc kubenswrapper[4792]: I0319 18:06:07.846382 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565720-27gv7"] Mar 19 18:06:09 crc kubenswrapper[4792]: I0319 18:06:09.754662 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274cacda-9f26-4e2a-8f66-6159174913b4" path="/var/lib/kubelet/pods/274cacda-9f26-4e2a-8f66-6159174913b4/volumes" Mar 19 18:06:15 crc kubenswrapper[4792]: I0319 18:06:15.740691 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:06:15 crc kubenswrapper[4792]: E0319 18:06:15.741804 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:06:26 crc kubenswrapper[4792]: I0319 18:06:26.069206 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-r6754_e7c8fc86-569f-425e-bb93-e75a206f1e68/control-plane-machine-set-operator/0.log" Mar 19 18:06:26 crc kubenswrapper[4792]: I0319 18:06:26.227048 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-28msx_79259d19-3c66-4aa6-baa6-666ee50833b2/kube-rbac-proxy/0.log" Mar 19 18:06:26 crc kubenswrapper[4792]: I0319 18:06:26.256197 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-28msx_79259d19-3c66-4aa6-baa6-666ee50833b2/machine-api-operator/0.log" Mar 19 18:06:29 crc kubenswrapper[4792]: I0319 18:06:29.740969 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:06:29 crc kubenswrapper[4792]: E0319 18:06:29.741497 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:06:41 crc kubenswrapper[4792]: I0319 18:06:41.740455 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:06:41 crc kubenswrapper[4792]: E0319 18:06:41.741316 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:06:42 crc kubenswrapper[4792]: I0319 18:06:42.607092 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-brbtt_7df83bb5-92b7-4c33-8907-29884370b54a/cert-manager-controller/0.log" Mar 19 18:06:42 crc kubenswrapper[4792]: I0319 18:06:42.783383 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5pbbd_6982c21c-b400-4cf9-8107-b94b0166c7e1/cert-manager-cainjector/0.log" Mar 19 18:06:42 crc 
kubenswrapper[4792]: I0319 18:06:42.788972 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bgdjc_bf8a2335-56a0-4c34-ac01-e93578bf4cbd/cert-manager-webhook/1.log" Mar 19 18:06:42 crc kubenswrapper[4792]: I0319 18:06:42.854472 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bgdjc_bf8a2335-56a0-4c34-ac01-e93578bf4cbd/cert-manager-webhook/0.log" Mar 19 18:06:53 crc kubenswrapper[4792]: I0319 18:06:53.740323 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:06:53 crc kubenswrapper[4792]: E0319 18:06:53.741288 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:06:57 crc kubenswrapper[4792]: I0319 18:06:57.599338 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-b7gwq_c138fbf0-cc91-4f17-913e-87f6f4fcbbe8/nmstate-console-plugin/0.log" Mar 19 18:06:57 crc kubenswrapper[4792]: I0319 18:06:57.966189 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mmsmp_ae053ba9-b3d6-427d-b0e4-88e11ef2ba71/nmstate-handler/0.log" Mar 19 18:06:58 crc kubenswrapper[4792]: I0319 18:06:58.079441 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-954lx_4ab3aea4-d9a5-42e1-9c73-435f5c722cbb/kube-rbac-proxy/0.log" Mar 19 18:06:58 crc kubenswrapper[4792]: I0319 18:06:58.111258 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-954lx_4ab3aea4-d9a5-42e1-9c73-435f5c722cbb/nmstate-metrics/0.log" Mar 19 18:06:58 crc kubenswrapper[4792]: I0319 18:06:58.235895 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-gzvjk_ee465a76-03de-4983-9c1f-a064e12aed69/nmstate-operator/0.log" Mar 19 18:06:58 crc kubenswrapper[4792]: I0319 18:06:58.308997 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-sjth6_9d86fdf3-73d9-48f7-b44f-6182252fc4f8/nmstate-webhook/0.log" Mar 19 18:07:04 crc kubenswrapper[4792]: I0319 18:07:04.739933 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:07:04 crc kubenswrapper[4792]: E0319 18:07:04.741214 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:07:10 crc kubenswrapper[4792]: I0319 18:07:10.903403 4792 scope.go:117] "RemoveContainer" containerID="16ac63c661a32b64c2efdab46ceec4ce680f23db49ec154150cf1830083f1ed1" Mar 19 18:07:14 crc kubenswrapper[4792]: I0319 18:07:14.593346 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-795c7b44df-ssttv_1d900a68-83bb-40f6-8841-556f80c6ac78/kube-rbac-proxy/0.log" Mar 19 18:07:14 crc kubenswrapper[4792]: I0319 18:07:14.670809 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-795c7b44df-ssttv_1d900a68-83bb-40f6-8841-556f80c6ac78/manager/0.log" Mar 19 18:07:16 crc 
kubenswrapper[4792]: I0319 18:07:16.740812 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:07:16 crc kubenswrapper[4792]: E0319 18:07:16.742019 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:07:27 crc kubenswrapper[4792]: I0319 18:07:27.749208 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:07:27 crc kubenswrapper[4792]: E0319 18:07:27.749932 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.052263 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-8kls8_179c2f97-fb0f-424d-81fe-0d6dd21be292/prometheus-operator/0.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.194683 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_192e0659-f9b8-4855-b360-dce9a7978f38/prometheus-operator-admission-webhook/0.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.288979 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_1dc54b16-28bf-4658-91c0-5f0db7405082/prometheus-operator-admission-webhook/0.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.414954 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-625pf_7f7fc8f3-521e-42a6-95e0-18f42faf92c4/operator/1.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.468231 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-625pf_7f7fc8f3-521e-42a6-95e0-18f42faf92c4/operator/0.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.569275 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7f87b9b85b-49sm6_b958b34e-1fbb-4f66-bec7-130b5a0d2d9c/observability-ui-dashboards/0.log" Mar 19 18:07:28 crc kubenswrapper[4792]: I0319 18:07:28.713040 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5b64d67795-hhzt7_3477a59c-705b-42e9-bf3e-6ec92fecfc9e/perses-operator/0.log" Mar 19 18:07:39 crc kubenswrapper[4792]: I0319 18:07:39.740085 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:07:39 crc kubenswrapper[4792]: E0319 18:07:39.741045 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:07:45 crc kubenswrapper[4792]: I0319 18:07:45.516200 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_cluster-logging-operator-66689c4bbf-jdjgh_356f8438-fd17-4eed-8b43-92331b3a006c/cluster-logging-operator/0.log" Mar 19 18:07:45 crc kubenswrapper[4792]: I0319 18:07:45.531185 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-rqcnx_a879f867-df69-4895-836e-59a2c3333716/collector/0.log" Mar 19 18:07:45 crc kubenswrapper[4792]: I0319 18:07:45.754219 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_312a9ea1-8c2b-4b68-a4c2-55869981692e/loki-compactor/0.log" Mar 19 18:07:45 crc kubenswrapper[4792]: I0319 18:07:45.813923 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-9c6b6d984-lmw24_54c15722-d849-4290-bf53-39c4383912e4/loki-distributor/0.log" Mar 19 18:07:45 crc kubenswrapper[4792]: I0319 18:07:45.987485 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6c599cb-2gcbl_10c782de-230d-407d-9bb1-2a8a3a8da91c/gateway/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.004911 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6c599cb-2gcbl_10c782de-230d-407d-9bb1-2a8a3a8da91c/opa/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.202295 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6c599cb-vz8rf_1e5dbe4d-6818-4b0d-a372-b9574882f2ad/gateway/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.259815 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6c599cb-vz8rf_1e5dbe4d-6818-4b0d-a372-b9574882f2ad/opa/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.304760 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_d42fa7f9-ea92-480c-8de6-cf0b6b9219e6/loki-index-gateway/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.500471 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_b90cdc46-8fb4-424e-be18-e675309acdff/loki-ingester/0.log" Mar 19 18:07:46 crc kubenswrapper[4792]: I0319 18:07:46.546617 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-6dcbdf8bb8-ljg58_78b39436-d594-47d8-9e75-8470495398ac/loki-querier/0.log" Mar 19 18:07:47 crc kubenswrapper[4792]: I0319 18:07:47.353464 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-ff66c4dc9-z95d6_03d0f2d0-18de-48b9-ba57-85e09753dccf/loki-query-frontend/0.log" Mar 19 18:07:50 crc kubenswrapper[4792]: I0319 18:07:50.740549 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:07:50 crc kubenswrapper[4792]: E0319 18:07:50.741404 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.149940 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565728-srsll"] Mar 19 18:08:00 crc kubenswrapper[4792]: E0319 18:08:00.151015 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97b0dad-b721-4875-adcf-00c1de0a73c7" containerName="oc" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.151027 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e97b0dad-b721-4875-adcf-00c1de0a73c7" containerName="oc" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.151317 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97b0dad-b721-4875-adcf-00c1de0a73c7" containerName="oc" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.152178 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.156777 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.157063 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.157946 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.165020 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565728-srsll"] Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.281576 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnpd\" (UniqueName: \"kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd\") pod \"auto-csr-approver-29565728-srsll\" (UID: \"fb06173e-b912-462b-813a-9822e6fbd709\") " pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.384196 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnpd\" (UniqueName: \"kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd\") pod \"auto-csr-approver-29565728-srsll\" (UID: \"fb06173e-b912-462b-813a-9822e6fbd709\") " 
pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.408564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnpd\" (UniqueName: \"kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd\") pod \"auto-csr-approver-29565728-srsll\" (UID: \"fb06173e-b912-462b-813a-9822e6fbd709\") " pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.476593 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:00 crc kubenswrapper[4792]: I0319 18:08:00.992760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565728-srsll"] Mar 19 18:08:01 crc kubenswrapper[4792]: I0319 18:08:01.740229 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:08:01 crc kubenswrapper[4792]: E0319 18:08:01.740978 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:02 crc kubenswrapper[4792]: I0319 18:08:02.036058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565728-srsll" event={"ID":"fb06173e-b912-462b-813a-9822e6fbd709","Type":"ContainerStarted","Data":"874bb1f70706cac74b8ee0badae637881d1a2229433613740ddab91c937fb360"} Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.066220 4792 generic.go:334] "Generic (PLEG): container finished" podID="fb06173e-b912-462b-813a-9822e6fbd709" 
containerID="ac1004dd47948892e2346dc119528fc562aa76d7c31b6a833d9cdf2007bfefd3" exitCode=0 Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.066493 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565728-srsll" event={"ID":"fb06173e-b912-462b-813a-9822e6fbd709","Type":"ContainerDied","Data":"ac1004dd47948892e2346dc119528fc562aa76d7c31b6a833d9cdf2007bfefd3"} Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.175312 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gdvnw_69f67eea-c8b3-40a4-891a-4c15c31cb410/kube-rbac-proxy/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.290104 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gdvnw_69f67eea-c8b3-40a4-891a-4c15c31cb410/controller/1.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.380800 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gdvnw_69f67eea-c8b3-40a4-891a-4c15c31cb410/controller/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.503321 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-frr-files/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.697522 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-reloader/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.716602 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-metrics/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 18:08:03.796586 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-frr-files/0.log" Mar 19 18:08:03 crc kubenswrapper[4792]: I0319 
18:08:03.812473 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-reloader/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.021524 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-frr-files/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.074796 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-metrics/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.107899 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-reloader/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.252036 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-metrics/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.486128 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-reloader/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.495035 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-metrics/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.514921 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.520134 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/controller/1.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.550319 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/cp-frr-files/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.603549 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrnpd\" (UniqueName: \"kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd\") pod \"fb06173e-b912-462b-813a-9822e6fbd709\" (UID: \"fb06173e-b912-462b-813a-9822e6fbd709\") " Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.611052 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd" (OuterVolumeSpecName: "kube-api-access-zrnpd") pod "fb06173e-b912-462b-813a-9822e6fbd709" (UID: "fb06173e-b912-462b-813a-9822e6fbd709"). InnerVolumeSpecName "kube-api-access-zrnpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.706122 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrnpd\" (UniqueName: \"kubernetes.io/projected/fb06173e-b912-462b-813a-9822e6fbd709-kube-api-access-zrnpd\") on node \"crc\" DevicePath \"\"" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.787332 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/controller/0.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.848545 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/frr/1.log" Mar 19 18:08:04 crc kubenswrapper[4792]: I0319 18:08:04.876480 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/frr-metrics/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.050945 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/kube-rbac-proxy/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.093508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565728-srsll" event={"ID":"fb06173e-b912-462b-813a-9822e6fbd709","Type":"ContainerDied","Data":"874bb1f70706cac74b8ee0badae637881d1a2229433613740ddab91c937fb360"} Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.093543 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874bb1f70706cac74b8ee0badae637881d1a2229433613740ddab91c937fb360" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.093585 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565728-srsll" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.133378 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/kube-rbac-proxy-frr/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.182361 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/reloader/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.336881 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-kzh4h_30ef8aea-daf2-4351-bf36-a8238738129a/frr-k8s-webhook-server/1.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.406242 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-kzh4h_30ef8aea-daf2-4351-bf36-a8238738129a/frr-k8s-webhook-server/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.613169 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565722-4zmxd"] Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.614131 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c96bc4ccc-fw8z7_e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76/manager/1.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.643267 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565722-4zmxd"] Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.683944 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c96bc4ccc-fw8z7_e9074fa9-9fdb-406d-9f1a-d52d5bdb5b76/manager/0.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.755078 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="582089ff-ec32-4c78-bb81-d650559d9659" path="/var/lib/kubelet/pods/582089ff-ec32-4c78-bb81-d650559d9659/volumes" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.823609 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8559bd9b58-4dc8w_4b613458-1b90-42f8-8d32-d3017f189770/webhook-server/1.log" Mar 19 18:08:05 crc kubenswrapper[4792]: I0319 18:08:05.953254 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8559bd9b58-4dc8w_4b613458-1b90-42f8-8d32-d3017f189770/webhook-server/0.log" Mar 19 18:08:06 crc kubenswrapper[4792]: I0319 18:08:06.348153 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6cld2_ee375e3b-1376-4cd4-93b7-da4316b203a7/kube-rbac-proxy/0.log" Mar 19 18:08:06 crc kubenswrapper[4792]: I0319 18:08:06.398191 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nd7zd_81f1b6c9-e921-49a2-8149-767fe360d7d0/frr/0.log" Mar 19 18:08:06 crc kubenswrapper[4792]: I0319 18:08:06.786875 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6cld2_ee375e3b-1376-4cd4-93b7-da4316b203a7/speaker/1.log" Mar 19 18:08:07 crc kubenswrapper[4792]: I0319 18:08:07.061318 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6cld2_ee375e3b-1376-4cd4-93b7-da4316b203a7/speaker/0.log" Mar 19 18:08:13 crc kubenswrapper[4792]: I0319 18:08:13.740982 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:08:13 crc kubenswrapper[4792]: E0319 18:08:13.741725 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:22 crc kubenswrapper[4792]: I0319 18:08:22.591935 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/util/0.log" Mar 19 18:08:22 crc kubenswrapper[4792]: I0319 18:08:22.743380 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/util/0.log" Mar 19 18:08:22 crc kubenswrapper[4792]: I0319 18:08:22.827232 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/pull/0.log" Mar 19 18:08:22 crc kubenswrapper[4792]: I0319 18:08:22.848873 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/pull/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.006278 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/pull/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.042915 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.044822 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874qdmxz_e62f40d4-2108-4fee-a475-0ac60aa24d1b/extract/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.215402 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.348245 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/pull/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.355538 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/pull/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.379950 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.571058 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.612015 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/extract/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.643002 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1wqn2w_f7e02e6a-ee2d-4d53-a972-ddfaf33a218b/pull/0.log" Mar 
19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.775271 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.980328 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/util/0.log" Mar 19 18:08:23 crc kubenswrapper[4792]: I0319 18:08:23.985285 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.010833 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.211900 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/util/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.216498 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/extract/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.240758 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d56n4p7_448b4ad0-4d6b-4a1b-b39c-40d0c85a1e4c/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.405118 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/util/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.589931 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.631576 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/util/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.632465 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.813829 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/pull/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.824939 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/extract/0.log" Mar 19 18:08:24 crc kubenswrapper[4792]: I0319 18:08:24.835805 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cqpbcx_6c058218-adf4-41fb-ad6f-1ad65b5db417/util/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.030744 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/util/0.log" Mar 19 
18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.221720 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/pull/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.228152 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/util/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.245594 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/pull/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.431401 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/pull/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.438937 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/util/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.491926 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397267pxn8_d9eac154-f601-45a7-9d86-07e01fe01bf1/extract/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.681052 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-utilities/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.895473 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-content/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.898207 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-utilities/0.log" Mar 19 18:08:25 crc kubenswrapper[4792]: I0319 18:08:25.977064 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-content/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.083927 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-utilities/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.103481 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/extract-content/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.192754 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/registry-server/1.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.393602 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-utilities/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.644268 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-content/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.684242 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-utilities/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.701416 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-content/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.812946 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8m42q_84114ace-d7fd-41a3-9fa6-87df44501023/registry-server/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.928782 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-content/0.log" Mar 19 18:08:26 crc kubenswrapper[4792]: I0319 18:08:26.936984 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/extract-utilities/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.116642 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/registry-server/1.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.200946 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vswr4_a9918a46-a0e8-400e-bd0c-0af4b0d05339/marketplace-operator/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.395932 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-utilities/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.631952 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-content/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.747467 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:08:27 crc kubenswrapper[4792]: E0319 18:08:27.747851 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.763059 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-utilities/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.804344 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gb64t_7a6583ed-1c62-448f-98f6-6055fe84c457/registry-server/0.log" Mar 19 18:08:27 crc kubenswrapper[4792]: I0319 18:08:27.975762 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-content/0.log" Mar 19 18:08:28 crc kubenswrapper[4792]: I0319 18:08:28.140781 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-utilities/0.log" Mar 19 18:08:28 crc kubenswrapper[4792]: I0319 18:08:28.203293 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/extract-content/0.log" Mar 19 18:08:28 crc 
kubenswrapper[4792]: I0319 18:08:28.327149 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/registry-server/1.log" Mar 19 18:08:28 crc kubenswrapper[4792]: I0319 18:08:28.386924 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5hq59_380412c4-57ca-4428-838c-ab93fc6c71cc/registry-server/0.log" Mar 19 18:08:28 crc kubenswrapper[4792]: I0319 18:08:28.400922 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-utilities/0.log" Mar 19 18:08:28 crc kubenswrapper[4792]: I0319 18:08:28.550998 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-utilities/0.log" Mar 19 18:08:29 crc kubenswrapper[4792]: I0319 18:08:29.283762 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-content/0.log" Mar 19 18:08:29 crc kubenswrapper[4792]: I0319 18:08:29.395616 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-content/0.log" Mar 19 18:08:29 crc kubenswrapper[4792]: I0319 18:08:29.633749 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-utilities/0.log" Mar 19 18:08:29 crc kubenswrapper[4792]: I0319 18:08:29.656819 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/extract-content/0.log" Mar 19 18:08:29 crc kubenswrapper[4792]: I0319 18:08:29.717367 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/registry-server/1.log" Mar 19 18:08:30 crc kubenswrapper[4792]: I0319 18:08:30.130021 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h7gpk_9faaddd3-77ad-4bc9-97ce-21a824aeb1c0/registry-server/0.log" Mar 19 18:08:40 crc kubenswrapper[4792]: I0319 18:08:40.739814 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:08:40 crc kubenswrapper[4792]: E0319 18:08:40.740599 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:44 crc kubenswrapper[4792]: I0319 18:08:44.856953 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bdd76dff8-kfcpz_1dc54b16-28bf-4658-91c0-5f0db7405082/prometheus-operator-admission-webhook/0.log" Mar 19 18:08:44 crc kubenswrapper[4792]: I0319 18:08:44.870274 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bdd76dff8-8tmbk_192e0659-f9b8-4855-b360-dce9a7978f38/prometheus-operator-admission-webhook/0.log" Mar 19 18:08:44 crc kubenswrapper[4792]: I0319 18:08:44.947091 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-8kls8_179c2f97-fb0f-424d-81fe-0d6dd21be292/prometheus-operator/0.log" Mar 19 18:08:45 crc kubenswrapper[4792]: I0319 18:08:45.079543 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-625pf_7f7fc8f3-521e-42a6-95e0-18f42faf92c4/operator/0.log" Mar 19 18:08:45 crc kubenswrapper[4792]: I0319 18:08:45.088737 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-625pf_7f7fc8f3-521e-42a6-95e0-18f42faf92c4/operator/1.log" Mar 19 18:08:45 crc kubenswrapper[4792]: I0319 18:08:45.137309 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7f87b9b85b-49sm6_b958b34e-1fbb-4f66-bec7-130b5a0d2d9c/observability-ui-dashboards/0.log" Mar 19 18:08:45 crc kubenswrapper[4792]: I0319 18:08:45.174441 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5b64d67795-hhzt7_3477a59c-705b-42e9-bf3e-6ec92fecfc9e/perses-operator/0.log" Mar 19 18:08:52 crc kubenswrapper[4792]: I0319 18:08:52.740364 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:08:52 crc kubenswrapper[4792]: E0319 18:08:52.741227 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:08:59 crc kubenswrapper[4792]: I0319 18:08:59.245125 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-795c7b44df-ssttv_1d900a68-83bb-40f6-8841-556f80c6ac78/kube-rbac-proxy/0.log" Mar 19 18:08:59 crc kubenswrapper[4792]: I0319 18:08:59.988071 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-795c7b44df-ssttv_1d900a68-83bb-40f6-8841-556f80c6ac78/manager/0.log" Mar 19 18:09:04 crc kubenswrapper[4792]: I0319 18:09:04.740418 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:09:04 crc kubenswrapper[4792]: E0319 18:09:04.741248 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:09:11 crc kubenswrapper[4792]: I0319 18:09:11.092442 4792 scope.go:117] "RemoveContainer" containerID="21b14d90ba1210bbdea112cea9ea38a3a20d86cc0ae8b6b007e8e9a5938a896a" Mar 19 18:09:18 crc kubenswrapper[4792]: I0319 18:09:18.739889 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:09:18 crc kubenswrapper[4792]: E0319 18:09:18.740877 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:09:33 crc kubenswrapper[4792]: I0319 18:09:33.744151 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:09:33 crc kubenswrapper[4792]: E0319 18:09:33.744951 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:09:47 crc kubenswrapper[4792]: I0319 18:09:47.761616 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:09:47 crc kubenswrapper[4792]: E0319 18:09:47.763138 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.156208 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565730-mnd97"] Mar 19 18:10:00 crc kubenswrapper[4792]: E0319 18:10:00.157406 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb06173e-b912-462b-813a-9822e6fbd709" containerName="oc" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.157425 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb06173e-b912-462b-813a-9822e6fbd709" containerName="oc" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.157660 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb06173e-b912-462b-813a-9822e6fbd709" containerName="oc" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.158519 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.162542 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.162814 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.167823 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.168675 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565730-mnd97"] Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.177175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5g6\" (UniqueName: \"kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6\") pod \"auto-csr-approver-29565730-mnd97\" (UID: \"4db900e0-0a74-41ca-b587-5d2623a492e3\") " pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.280206 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx5g6\" (UniqueName: \"kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6\") pod \"auto-csr-approver-29565730-mnd97\" (UID: \"4db900e0-0a74-41ca-b587-5d2623a492e3\") " pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.521074 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx5g6\" (UniqueName: \"kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6\") pod \"auto-csr-approver-29565730-mnd97\" (UID: \"4db900e0-0a74-41ca-b587-5d2623a492e3\") " 
pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:00 crc kubenswrapper[4792]: I0319 18:10:00.787636 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:01 crc kubenswrapper[4792]: I0319 18:10:01.329960 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565730-mnd97"] Mar 19 18:10:01 crc kubenswrapper[4792]: I0319 18:10:01.341219 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 18:10:01 crc kubenswrapper[4792]: I0319 18:10:01.740331 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:10:01 crc kubenswrapper[4792]: E0319 18:10:01.740605 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:10:01 crc kubenswrapper[4792]: I0319 18:10:01.809343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565730-mnd97" event={"ID":"4db900e0-0a74-41ca-b587-5d2623a492e3","Type":"ContainerStarted","Data":"8ae421f77a5083df194ac1b257e434432714d6008cc6822c233044d6835ae221"} Mar 19 18:10:03 crc kubenswrapper[4792]: I0319 18:10:03.830086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565730-mnd97" event={"ID":"4db900e0-0a74-41ca-b587-5d2623a492e3","Type":"ContainerStarted","Data":"de2f68e393ea1e9cfce0e834685ad7ad38850d979530524cae69e22177f779cc"} Mar 19 18:10:03 crc kubenswrapper[4792]: I0319 18:10:03.849193 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565730-mnd97" podStartSLOduration=2.296694731 podStartE2EDuration="3.849174506s" podCreationTimestamp="2026-03-19 18:10:00 +0000 UTC" firstStartedPulling="2026-03-19 18:10:01.33926084 +0000 UTC m=+5364.485318380" lastFinishedPulling="2026-03-19 18:10:02.891740605 +0000 UTC m=+5366.037798155" observedRunningTime="2026-03-19 18:10:03.844087378 +0000 UTC m=+5366.990144948" watchObservedRunningTime="2026-03-19 18:10:03.849174506 +0000 UTC m=+5366.995232046" Mar 19 18:10:04 crc kubenswrapper[4792]: I0319 18:10:04.844645 4792 generic.go:334] "Generic (PLEG): container finished" podID="4db900e0-0a74-41ca-b587-5d2623a492e3" containerID="de2f68e393ea1e9cfce0e834685ad7ad38850d979530524cae69e22177f779cc" exitCode=0 Mar 19 18:10:04 crc kubenswrapper[4792]: I0319 18:10:04.844900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565730-mnd97" event={"ID":"4db900e0-0a74-41ca-b587-5d2623a492e3","Type":"ContainerDied","Data":"de2f68e393ea1e9cfce0e834685ad7ad38850d979530524cae69e22177f779cc"} Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.355247 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.536083 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx5g6\" (UniqueName: \"kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6\") pod \"4db900e0-0a74-41ca-b587-5d2623a492e3\" (UID: \"4db900e0-0a74-41ca-b587-5d2623a492e3\") " Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.542645 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6" (OuterVolumeSpecName: "kube-api-access-rx5g6") pod "4db900e0-0a74-41ca-b587-5d2623a492e3" (UID: "4db900e0-0a74-41ca-b587-5d2623a492e3"). InnerVolumeSpecName "kube-api-access-rx5g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.640080 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx5g6\" (UniqueName: \"kubernetes.io/projected/4db900e0-0a74-41ca-b587-5d2623a492e3-kube-api-access-rx5g6\") on node \"crc\" DevicePath \"\"" Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.876911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565730-mnd97" event={"ID":"4db900e0-0a74-41ca-b587-5d2623a492e3","Type":"ContainerDied","Data":"8ae421f77a5083df194ac1b257e434432714d6008cc6822c233044d6835ae221"} Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.877202 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae421f77a5083df194ac1b257e434432714d6008cc6822c233044d6835ae221" Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.877012 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565730-mnd97" Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.959799 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565724-45hfn"] Mar 19 18:10:06 crc kubenswrapper[4792]: I0319 18:10:06.985215 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565724-45hfn"] Mar 19 18:10:07 crc kubenswrapper[4792]: I0319 18:10:07.772597 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf652b5-229b-4714-b196-5f51ff19afe2" path="/var/lib/kubelet/pods/5cf652b5-229b-4714-b196-5f51ff19afe2/volumes" Mar 19 18:10:11 crc kubenswrapper[4792]: I0319 18:10:11.259616 4792 scope.go:117] "RemoveContainer" containerID="33118fa182d2393d6323b272d12b524383f4f703b5ba06c7b571b4eef1447489" Mar 19 18:10:11 crc kubenswrapper[4792]: I0319 18:10:11.933210 4792 scope.go:117] "RemoveContainer" containerID="9fa1b015a77b00e4868129e826b20cb301d37c124354d04e5a258f708ec43675" Mar 19 18:10:15 crc kubenswrapper[4792]: I0319 18:10:15.740220 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:10:15 crc kubenswrapper[4792]: E0319 18:10:15.741151 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szhln_openshift-machine-config-operator(a9e72e9a-50c3-41db-8657-7ae683c7c13a)\"" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" Mar 19 18:10:28 crc kubenswrapper[4792]: I0319 18:10:28.742053 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:10:29 crc kubenswrapper[4792]: I0319 18:10:29.164594 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"57d15e901fe2337fd8a05868cdd2220643fc733c1e74b8172874bbeed3be0673"} Mar 19 18:10:52 crc kubenswrapper[4792]: I0319 18:10:52.476907 4792 generic.go:334] "Generic (PLEG): container finished" podID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerID="803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b" exitCode=0 Mar 19 18:10:52 crc kubenswrapper[4792]: I0319 18:10:52.477075 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" event={"ID":"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef","Type":"ContainerDied","Data":"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b"} Mar 19 18:10:52 crc kubenswrapper[4792]: I0319 18:10:52.478324 4792 scope.go:117] "RemoveContainer" containerID="803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b" Mar 19 18:10:53 crc kubenswrapper[4792]: I0319 18:10:53.181484 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdjk2_must-gather-tf6s4_f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef/gather/0.log" Mar 19 18:11:02 crc kubenswrapper[4792]: I0319 18:11:02.596784 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdjk2/must-gather-tf6s4"] Mar 19 18:11:02 crc kubenswrapper[4792]: I0319 18:11:02.599000 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="copy" containerID="cri-o://ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85" gracePeriod=2 Mar 19 18:11:02 crc kubenswrapper[4792]: I0319 18:11:02.617494 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdjk2/must-gather-tf6s4"] Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.109368 4792 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-must-gather-rdjk2_must-gather-tf6s4_f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef/copy/0.log" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.110723 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.210047 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output\") pod \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.210118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z892c\" (UniqueName: \"kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c\") pod \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\" (UID: \"f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef\") " Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.219267 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c" (OuterVolumeSpecName: "kube-api-access-z892c") pod "f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" (UID: "f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef"). InnerVolumeSpecName "kube-api-access-z892c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.313654 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z892c\" (UniqueName: \"kubernetes.io/projected/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-kube-api-access-z892c\") on node \"crc\" DevicePath \"\"" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.444448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" (UID: "f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.526862 4792 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.626037 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdjk2_must-gather-tf6s4_f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef/copy/0.log" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.627538 4792 generic.go:334] "Generic (PLEG): container finished" podID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerID="ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85" exitCode=143 Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.627653 4792 scope.go:117] "RemoveContainer" containerID="ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.627868 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdjk2/must-gather-tf6s4" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.657560 4792 scope.go:117] "RemoveContainer" containerID="803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.711248 4792 scope.go:117] "RemoveContainer" containerID="ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85" Mar 19 18:11:03 crc kubenswrapper[4792]: E0319 18:11:03.712253 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85\": container with ID starting with ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85 not found: ID does not exist" containerID="ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.712287 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85"} err="failed to get container status \"ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85\": rpc error: code = NotFound desc = could not find container \"ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85\": container with ID starting with ed6e428e3a402a99c2f4371752d63f7feeb4f08f7cb406ce94b166be134d2a85 not found: ID does not exist" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.712307 4792 scope.go:117] "RemoveContainer" containerID="803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b" Mar 19 18:11:03 crc kubenswrapper[4792]: E0319 18:11:03.712590 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b\": container with ID starting with 
803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b not found: ID does not exist" containerID="803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.712630 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b"} err="failed to get container status \"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b\": rpc error: code = NotFound desc = could not find container \"803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b\": container with ID starting with 803ba1cd9d943089e5f61fc67fe2668781685f17c6f58ed09c849fc764bf620b not found: ID does not exist" Mar 19 18:11:03 crc kubenswrapper[4792]: I0319 18:11:03.766473 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" path="/var/lib/kubelet/pods/f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef/volumes" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.927392 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:38 crc kubenswrapper[4792]: E0319 18:11:38.930294 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="copy" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930350 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="copy" Mar 19 18:11:38 crc kubenswrapper[4792]: E0319 18:11:38.930381 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="gather" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930390 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="gather" Mar 19 18:11:38 crc kubenswrapper[4792]: E0319 
18:11:38.930410 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db900e0-0a74-41ca-b587-5d2623a492e3" containerName="oc" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930417 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db900e0-0a74-41ca-b587-5d2623a492e3" containerName="oc" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930761 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="gather" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930806 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db900e0-0a74-41ca-b587-5d2623a492e3" containerName="oc" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.930819 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ca0657-c0f1-4bbd-bc5d-2dcee73ae5ef" containerName="copy" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.939659 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:38 crc kubenswrapper[4792]: I0319 18:11:38.944085 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.027183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.027219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtv6\" (UniqueName: \"kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.027567 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.129546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.129599 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qrtv6\" (UniqueName: \"kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.129948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.130095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.130240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.177191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrtv6\" (UniqueName: \"kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6\") pod \"redhat-marketplace-p5w8d\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.310809 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:39 crc kubenswrapper[4792]: I0319 18:11:39.796410 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:40 crc kubenswrapper[4792]: I0319 18:11:40.068412 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerID="f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1" exitCode=0 Mar 19 18:11:40 crc kubenswrapper[4792]: I0319 18:11:40.068458 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerDied","Data":"f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1"} Mar 19 18:11:40 crc kubenswrapper[4792]: I0319 18:11:40.068481 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerStarted","Data":"34a72bfcf11c7f5e018693c84547f185cba5b93db0c76fa58eba6e3e3b0b091d"} Mar 19 18:11:42 crc kubenswrapper[4792]: I0319 18:11:42.093964 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerID="7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9" exitCode=0 Mar 19 18:11:42 crc kubenswrapper[4792]: I0319 18:11:42.094083 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerDied","Data":"7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9"} Mar 19 18:11:43 crc kubenswrapper[4792]: I0319 18:11:43.113427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" 
event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerStarted","Data":"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf"} Mar 19 18:11:43 crc kubenswrapper[4792]: I0319 18:11:43.139276 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p5w8d" podStartSLOduration=2.6947331610000003 podStartE2EDuration="5.139254169s" podCreationTimestamp="2026-03-19 18:11:38 +0000 UTC" firstStartedPulling="2026-03-19 18:11:40.078829175 +0000 UTC m=+5463.224886715" lastFinishedPulling="2026-03-19 18:11:42.523350183 +0000 UTC m=+5465.669407723" observedRunningTime="2026-03-19 18:11:43.130101149 +0000 UTC m=+5466.276158689" watchObservedRunningTime="2026-03-19 18:11:43.139254169 +0000 UTC m=+5466.285311709" Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.822403 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.826598 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.835238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.905383 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.905660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts22b\" (UniqueName: \"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:48 crc kubenswrapper[4792]: I0319 18:11:48.905771 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.007444 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.007492 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ts22b\" (UniqueName: \"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.007533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.008193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.008210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.035166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts22b\" (UniqueName: \"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b\") pod \"redhat-operators-nm2qb\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.169791 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.312567 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.312877 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.410294 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:49 crc kubenswrapper[4792]: I0319 18:11:49.691936 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:11:50 crc kubenswrapper[4792]: I0319 18:11:50.218965 4792 generic.go:334] "Generic (PLEG): container finished" podID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerID="2da973a03cf9f0828cbd7ad99e6acfa87f31273a0326e9e456d03f8b353f566c" exitCode=0 Mar 19 18:11:50 crc kubenswrapper[4792]: I0319 18:11:50.219165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerDied","Data":"2da973a03cf9f0828cbd7ad99e6acfa87f31273a0326e9e456d03f8b353f566c"} Mar 19 18:11:50 crc kubenswrapper[4792]: I0319 18:11:50.219328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerStarted","Data":"b6b570a0e2e0acc0e8832999dcadef72904d1a415afb2d7f51897824a26867b4"} Mar 19 18:11:50 crc kubenswrapper[4792]: I0319 18:11:50.298081 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:51 crc kubenswrapper[4792]: I0319 18:11:51.236501 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerStarted","Data":"917b4b257eb1acf222044128b1b977d8ba51e2b173f157f34f1d21c3a3a46048"} Mar 19 18:11:51 crc kubenswrapper[4792]: I0319 18:11:51.784950 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:52 crc kubenswrapper[4792]: I0319 18:11:52.246562 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p5w8d" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="registry-server" containerID="cri-o://d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf" gracePeriod=2 Mar 19 18:11:52 crc kubenswrapper[4792]: I0319 18:11:52.970176 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.076381 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content\") pod \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.076506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrtv6\" (UniqueName: \"kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6\") pod \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\" (UID: \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.076546 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities\") pod \"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\" (UID: 
\"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787\") " Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.077129 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities" (OuterVolumeSpecName: "utilities") pod "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" (UID: "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.081811 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6" (OuterVolumeSpecName: "kube-api-access-qrtv6") pod "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" (UID: "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787"). InnerVolumeSpecName "kube-api-access-qrtv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.103769 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" (UID: "fc5cfa10-57ec-490e-87cd-fb9b2eb2b787"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.179219 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.179259 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrtv6\" (UniqueName: \"kubernetes.io/projected/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-kube-api-access-qrtv6\") on node \"crc\" DevicePath \"\"" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.179274 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.274039 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerID="d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf" exitCode=0 Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.274079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerDied","Data":"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf"} Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.274103 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5w8d" event={"ID":"fc5cfa10-57ec-490e-87cd-fb9b2eb2b787","Type":"ContainerDied","Data":"34a72bfcf11c7f5e018693c84547f185cba5b93db0c76fa58eba6e3e3b0b091d"} Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.274122 4792 scope.go:117] "RemoveContainer" containerID="d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 
18:11:53.274244 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5w8d" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.321706 4792 scope.go:117] "RemoveContainer" containerID="7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.329221 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.340303 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5w8d"] Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.342932 4792 scope.go:117] "RemoveContainer" containerID="f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.402617 4792 scope.go:117] "RemoveContainer" containerID="d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf" Mar 19 18:11:53 crc kubenswrapper[4792]: E0319 18:11:53.403431 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf\": container with ID starting with d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf not found: ID does not exist" containerID="d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.403523 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf"} err="failed to get container status \"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf\": rpc error: code = NotFound desc = could not find container \"d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf\": container with ID starting with 
d456a5eb767ffda1dbe5b9ec6999a2c30a76c1a1c6181b74539e4d0449f63bcf not found: ID does not exist" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.403551 4792 scope.go:117] "RemoveContainer" containerID="7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9" Mar 19 18:11:53 crc kubenswrapper[4792]: E0319 18:11:53.404232 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9\": container with ID starting with 7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9 not found: ID does not exist" containerID="7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.404256 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9"} err="failed to get container status \"7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9\": rpc error: code = NotFound desc = could not find container \"7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9\": container with ID starting with 7018a5e0a5a8374654ad68424ce1d4e8e4d413fe724d7c659c2b56f3ec055bf9 not found: ID does not exist" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.404272 4792 scope.go:117] "RemoveContainer" containerID="f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1" Mar 19 18:11:53 crc kubenswrapper[4792]: E0319 18:11:53.404651 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1\": container with ID starting with f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1 not found: ID does not exist" containerID="f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1" Mar 19 18:11:53 crc 
kubenswrapper[4792]: I0319 18:11:53.404672 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1"} err="failed to get container status \"f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1\": rpc error: code = NotFound desc = could not find container \"f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1\": container with ID starting with f5c50dd59cfb48940b6d7eca8f2e0db8736210e762f8129b174f5e842fde6ef1 not found: ID does not exist" Mar 19 18:11:53 crc kubenswrapper[4792]: I0319 18:11:53.754675 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" path="/var/lib/kubelet/pods/fc5cfa10-57ec-490e-87cd-fb9b2eb2b787/volumes" Mar 19 18:11:57 crc kubenswrapper[4792]: I0319 18:11:57.320859 4792 generic.go:334] "Generic (PLEG): container finished" podID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerID="917b4b257eb1acf222044128b1b977d8ba51e2b173f157f34f1d21c3a3a46048" exitCode=0 Mar 19 18:11:57 crc kubenswrapper[4792]: I0319 18:11:57.320931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerDied","Data":"917b4b257eb1acf222044128b1b977d8ba51e2b173f157f34f1d21c3a3a46048"} Mar 19 18:11:58 crc kubenswrapper[4792]: I0319 18:11:58.334605 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerStarted","Data":"594497f262681c627fa83b342d21447b4b8019c9998fa20b081426466815bdf4"} Mar 19 18:11:58 crc kubenswrapper[4792]: I0319 18:11:58.369366 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nm2qb" podStartSLOduration=2.637962334 podStartE2EDuration="10.369345197s" 
podCreationTimestamp="2026-03-19 18:11:48 +0000 UTC" firstStartedPulling="2026-03-19 18:11:50.224167722 +0000 UTC m=+5473.370225252" lastFinishedPulling="2026-03-19 18:11:57.955550575 +0000 UTC m=+5481.101608115" observedRunningTime="2026-03-19 18:11:58.351591451 +0000 UTC m=+5481.497648991" watchObservedRunningTime="2026-03-19 18:11:58.369345197 +0000 UTC m=+5481.515402737" Mar 19 18:11:59 crc kubenswrapper[4792]: I0319 18:11:59.170645 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:11:59 crc kubenswrapper[4792]: I0319 18:11:59.170695 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.149613 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565732-7x2p7"] Mar 19 18:12:00 crc kubenswrapper[4792]: E0319 18:12:00.151050 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="extract-utilities" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.151069 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="extract-utilities" Mar 19 18:12:00 crc kubenswrapper[4792]: E0319 18:12:00.151101 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="extract-content" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.151111 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="extract-content" Mar 19 18:12:00 crc kubenswrapper[4792]: E0319 18:12:00.151138 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="registry-server" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.151147 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="registry-server" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.152295 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5cfa10-57ec-490e-87cd-fb9b2eb2b787" containerName="registry-server" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.153317 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.156433 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.156731 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.157055 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.168305 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565732-7x2p7"] Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.266797 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nm2qb" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" probeResult="failure" output=< Mar 19 18:12:00 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:12:00 crc kubenswrapper[4792]: > Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.269386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrmt\" (UniqueName: \"kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt\") pod \"auto-csr-approver-29565732-7x2p7\" (UID: 
\"2da496f7-636f-4ebb-8892-901fbb220bbd\") " pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.371648 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lrmt\" (UniqueName: \"kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt\") pod \"auto-csr-approver-29565732-7x2p7\" (UID: \"2da496f7-636f-4ebb-8892-901fbb220bbd\") " pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.399475 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrmt\" (UniqueName: \"kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt\") pod \"auto-csr-approver-29565732-7x2p7\" (UID: \"2da496f7-636f-4ebb-8892-901fbb220bbd\") " pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:00 crc kubenswrapper[4792]: I0319 18:12:00.475492 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:01 crc kubenswrapper[4792]: W0319 18:12:01.263397 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da496f7_636f_4ebb_8892_901fbb220bbd.slice/crio-953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b WatchSource:0}: Error finding container 953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b: Status 404 returned error can't find the container with id 953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b Mar 19 18:12:01 crc kubenswrapper[4792]: I0319 18:12:01.268634 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565732-7x2p7"] Mar 19 18:12:01 crc kubenswrapper[4792]: I0319 18:12:01.381925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" event={"ID":"2da496f7-636f-4ebb-8892-901fbb220bbd","Type":"ContainerStarted","Data":"953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b"} Mar 19 18:12:03 crc kubenswrapper[4792]: I0319 18:12:03.412797 4792 generic.go:334] "Generic (PLEG): container finished" podID="2da496f7-636f-4ebb-8892-901fbb220bbd" containerID="230d9bcf7f6e1f03969a793809ea146c86e8d38bd984f5acf1cbfae383b2d0ff" exitCode=0 Mar 19 18:12:03 crc kubenswrapper[4792]: I0319 18:12:03.412873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" event={"ID":"2da496f7-636f-4ebb-8892-901fbb220bbd","Type":"ContainerDied","Data":"230d9bcf7f6e1f03969a793809ea146c86e8d38bd984f5acf1cbfae383b2d0ff"} Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.153815 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.274533 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lrmt\" (UniqueName: \"kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt\") pod \"2da496f7-636f-4ebb-8892-901fbb220bbd\" (UID: \"2da496f7-636f-4ebb-8892-901fbb220bbd\") " Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.281053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt" (OuterVolumeSpecName: "kube-api-access-8lrmt") pod "2da496f7-636f-4ebb-8892-901fbb220bbd" (UID: "2da496f7-636f-4ebb-8892-901fbb220bbd"). InnerVolumeSpecName "kube-api-access-8lrmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.379532 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lrmt\" (UniqueName: \"kubernetes.io/projected/2da496f7-636f-4ebb-8892-901fbb220bbd-kube-api-access-8lrmt\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.462774 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" event={"ID":"2da496f7-636f-4ebb-8892-901fbb220bbd","Type":"ContainerDied","Data":"953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b"} Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.462817 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="953743ed63edf2897635eb4b93b7f4f2f74bf4a14694778ca0d32e36a2b3483b" Mar 19 18:12:05 crc kubenswrapper[4792]: I0319 18:12:05.462902 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29565732-7x2p7" Mar 19 18:12:06 crc kubenswrapper[4792]: I0319 18:12:06.232900 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565726-ts6td"] Mar 19 18:12:06 crc kubenswrapper[4792]: I0319 18:12:06.242508 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565726-ts6td"] Mar 19 18:12:07 crc kubenswrapper[4792]: I0319 18:12:07.756362 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97b0dad-b721-4875-adcf-00c1de0a73c7" path="/var/lib/kubelet/pods/e97b0dad-b721-4875-adcf-00c1de0a73c7/volumes" Mar 19 18:12:10 crc kubenswrapper[4792]: I0319 18:12:10.238708 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nm2qb" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" probeResult="failure" output=< Mar 19 18:12:10 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:12:10 crc kubenswrapper[4792]: > Mar 19 18:12:12 crc kubenswrapper[4792]: I0319 18:12:12.159828 4792 scope.go:117] "RemoveContainer" containerID="121f58945364e31f15bc796e99de937f906894e025da9a25af86563b11e8218c" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.164627 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:16 crc kubenswrapper[4792]: E0319 18:12:16.165781 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da496f7-636f-4ebb-8892-901fbb220bbd" containerName="oc" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.165794 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da496f7-636f-4ebb-8892-901fbb220bbd" containerName="oc" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.166082 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da496f7-636f-4ebb-8892-901fbb220bbd" 
containerName="oc" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.168662 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.178167 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.260313 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.260394 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wtp\" (UniqueName: \"kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.260473 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.362635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " 
pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.362702 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wtp\" (UniqueName: \"kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.362796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.363401 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.363491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.389785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wtp\" (UniqueName: \"kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp\") pod \"community-operators-n8psl\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " 
pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:16 crc kubenswrapper[4792]: I0319 18:12:16.499526 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:17 crc kubenswrapper[4792]: I0319 18:12:17.151669 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:17 crc kubenswrapper[4792]: I0319 18:12:17.601985 4792 generic.go:334] "Generic (PLEG): container finished" podID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerID="6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed" exitCode=0 Mar 19 18:12:17 crc kubenswrapper[4792]: I0319 18:12:17.603552 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerDied","Data":"6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed"} Mar 19 18:12:17 crc kubenswrapper[4792]: I0319 18:12:17.603641 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerStarted","Data":"d83ab43f276deb89ac2cadf445d182352762f1466a6078a2461f74afbc8ec924"} Mar 19 18:12:18 crc kubenswrapper[4792]: I0319 18:12:18.616023 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerStarted","Data":"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2"} Mar 19 18:12:20 crc kubenswrapper[4792]: I0319 18:12:20.259085 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nm2qb" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" probeResult="failure" output=< Mar 19 18:12:20 crc kubenswrapper[4792]: timeout: failed to connect service 
":50051" within 1s Mar 19 18:12:20 crc kubenswrapper[4792]: > Mar 19 18:12:20 crc kubenswrapper[4792]: I0319 18:12:20.635240 4792 generic.go:334] "Generic (PLEG): container finished" podID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerID="ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2" exitCode=0 Mar 19 18:12:20 crc kubenswrapper[4792]: I0319 18:12:20.635284 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerDied","Data":"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2"} Mar 19 18:12:21 crc kubenswrapper[4792]: I0319 18:12:21.647817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerStarted","Data":"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea"} Mar 19 18:12:21 crc kubenswrapper[4792]: I0319 18:12:21.674988 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n8psl" podStartSLOduration=2.230331115 podStartE2EDuration="5.674970003s" podCreationTimestamp="2026-03-19 18:12:16 +0000 UTC" firstStartedPulling="2026-03-19 18:12:17.606421751 +0000 UTC m=+5500.752479291" lastFinishedPulling="2026-03-19 18:12:21.051060639 +0000 UTC m=+5504.197118179" observedRunningTime="2026-03-19 18:12:21.673897614 +0000 UTC m=+5504.819955154" watchObservedRunningTime="2026-03-19 18:12:21.674970003 +0000 UTC m=+5504.821027543" Mar 19 18:12:26 crc kubenswrapper[4792]: I0319 18:12:26.500520 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:26 crc kubenswrapper[4792]: I0319 18:12:26.501043 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:27 crc 
kubenswrapper[4792]: I0319 18:12:27.560521 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n8psl" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="registry-server" probeResult="failure" output=< Mar 19 18:12:27 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:12:27 crc kubenswrapper[4792]: > Mar 19 18:12:30 crc kubenswrapper[4792]: I0319 18:12:30.229436 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nm2qb" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" probeResult="failure" output=< Mar 19 18:12:30 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Mar 19 18:12:30 crc kubenswrapper[4792]: > Mar 19 18:12:36 crc kubenswrapper[4792]: I0319 18:12:36.590262 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:36 crc kubenswrapper[4792]: I0319 18:12:36.646686 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:36 crc kubenswrapper[4792]: I0319 18:12:36.831677 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:37 crc kubenswrapper[4792]: I0319 18:12:37.856367 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n8psl" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="registry-server" containerID="cri-o://390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea" gracePeriod=2 Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.551872 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.713279 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content\") pod \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.714232 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5wtp\" (UniqueName: \"kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp\") pod \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.714569 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities\") pod \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\" (UID: \"98a9d8ec-9f38-4f80-8757-5d6b587a404e\") " Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.715758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities" (OuterVolumeSpecName: "utilities") pod "98a9d8ec-9f38-4f80-8757-5d6b587a404e" (UID: "98a9d8ec-9f38-4f80-8757-5d6b587a404e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.716464 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.723545 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp" (OuterVolumeSpecName: "kube-api-access-r5wtp") pod "98a9d8ec-9f38-4f80-8757-5d6b587a404e" (UID: "98a9d8ec-9f38-4f80-8757-5d6b587a404e"). InnerVolumeSpecName "kube-api-access-r5wtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.776336 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98a9d8ec-9f38-4f80-8757-5d6b587a404e" (UID: "98a9d8ec-9f38-4f80-8757-5d6b587a404e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.818886 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a9d8ec-9f38-4f80-8757-5d6b587a404e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.819158 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5wtp\" (UniqueName: \"kubernetes.io/projected/98a9d8ec-9f38-4f80-8757-5d6b587a404e-kube-api-access-r5wtp\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.871042 4792 generic.go:334] "Generic (PLEG): container finished" podID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerID="390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea" exitCode=0 Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.871101 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8psl" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.871118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerDied","Data":"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea"} Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.871152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8psl" event={"ID":"98a9d8ec-9f38-4f80-8757-5d6b587a404e","Type":"ContainerDied","Data":"d83ab43f276deb89ac2cadf445d182352762f1466a6078a2461f74afbc8ec924"} Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.871256 4792 scope.go:117] "RemoveContainer" containerID="390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.907637 4792 scope.go:117] "RemoveContainer" 
containerID="ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2" Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.911639 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.922490 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n8psl"] Mar 19 18:12:38 crc kubenswrapper[4792]: I0319 18:12:38.931112 4792 scope.go:117] "RemoveContainer" containerID="6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.009093 4792 scope.go:117] "RemoveContainer" containerID="390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea" Mar 19 18:12:39 crc kubenswrapper[4792]: E0319 18:12:39.009612 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea\": container with ID starting with 390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea not found: ID does not exist" containerID="390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.009647 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea"} err="failed to get container status \"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea\": rpc error: code = NotFound desc = could not find container \"390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea\": container with ID starting with 390f66cb1cf71aea623bd419b3935ae3994cb8ecf546863811c340d52f5ff7ea not found: ID does not exist" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.009671 4792 scope.go:117] "RemoveContainer" 
containerID="ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2" Mar 19 18:12:39 crc kubenswrapper[4792]: E0319 18:12:39.010151 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2\": container with ID starting with ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2 not found: ID does not exist" containerID="ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.010176 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2"} err="failed to get container status \"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2\": rpc error: code = NotFound desc = could not find container \"ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2\": container with ID starting with ffc9151dabba16323b6909f3e33a027757aa28728185c2d1a628bc38348efee2 not found: ID does not exist" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.010196 4792 scope.go:117] "RemoveContainer" containerID="6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed" Mar 19 18:12:39 crc kubenswrapper[4792]: E0319 18:12:39.010444 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed\": container with ID starting with 6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed not found: ID does not exist" containerID="6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.010465 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed"} err="failed to get container status \"6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed\": rpc error: code = NotFound desc = could not find container \"6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed\": container with ID starting with 6622b475f40df6079fdefdaead2a6e0b42cf5405cc6ad43e3df68ab2e9bc36ed not found: ID does not exist" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.227218 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.281238 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:12:39 crc kubenswrapper[4792]: I0319 18:12:39.757315 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" path="/var/lib/kubelet/pods/98a9d8ec-9f38-4f80-8757-5d6b587a404e/volumes" Mar 19 18:12:41 crc kubenswrapper[4792]: I0319 18:12:41.640364 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:12:41 crc kubenswrapper[4792]: I0319 18:12:41.641048 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nm2qb" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" containerID="cri-o://594497f262681c627fa83b342d21447b4b8019c9998fa20b081426466815bdf4" gracePeriod=2 Mar 19 18:12:41 crc kubenswrapper[4792]: I0319 18:12:41.909659 4792 generic.go:334] "Generic (PLEG): container finished" podID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerID="594497f262681c627fa83b342d21447b4b8019c9998fa20b081426466815bdf4" exitCode=0 Mar 19 18:12:41 crc kubenswrapper[4792]: I0319 18:12:41.909733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerDied","Data":"594497f262681c627fa83b342d21447b4b8019c9998fa20b081426466815bdf4"} Mar 19 18:12:42 crc kubenswrapper[4792]: I0319 18:12:42.925963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm2qb" event={"ID":"0016b751-1391-43a7-8bb7-ab61f6a13b6d","Type":"ContainerDied","Data":"b6b570a0e2e0acc0e8832999dcadef72904d1a415afb2d7f51897824a26867b4"} Mar 19 18:12:42 crc kubenswrapper[4792]: I0319 18:12:42.926443 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b570a0e2e0acc0e8832999dcadef72904d1a415afb2d7f51897824a26867b4" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.106569 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:43 crc kubenswrapper[4792]: E0319 18:12:43.107290 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="registry-server" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.107316 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="registry-server" Mar 19 18:12:43 crc kubenswrapper[4792]: E0319 18:12:43.107343 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="extract-content" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.107351 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="extract-content" Mar 19 18:12:43 crc kubenswrapper[4792]: E0319 18:12:43.107367 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="extract-utilities" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.107374 4792 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="extract-utilities" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.107667 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a9d8ec-9f38-4f80-8757-5d6b587a404e" containerName="registry-server" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.113094 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.118640 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.251324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrpfz\" (UniqueName: \"kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.251401 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.251447 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.279944 4792 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.354105 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts22b\" (UniqueName: \"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b\") pod \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.354151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content\") pod \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.354314 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities\") pod \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\" (UID: \"0016b751-1391-43a7-8bb7-ab61f6a13b6d\") " Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.354995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrpfz\" (UniqueName: \"kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.355447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 
18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.355953 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities" (OuterVolumeSpecName: "utilities") pod "0016b751-1391-43a7-8bb7-ab61f6a13b6d" (UID: "0016b751-1391-43a7-8bb7-ab61f6a13b6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.356051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.356117 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.356523 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.356718 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.368287 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b" (OuterVolumeSpecName: "kube-api-access-ts22b") pod "0016b751-1391-43a7-8bb7-ab61f6a13b6d" (UID: "0016b751-1391-43a7-8bb7-ab61f6a13b6d"). InnerVolumeSpecName "kube-api-access-ts22b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.377359 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrpfz\" (UniqueName: \"kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz\") pod \"certified-operators-hj8lb\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.458668 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts22b\" (UniqueName: \"kubernetes.io/projected/0016b751-1391-43a7-8bb7-ab61f6a13b6d-kube-api-access-ts22b\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.522697 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0016b751-1391-43a7-8bb7-ab61f6a13b6d" (UID: "0016b751-1391-43a7-8bb7-ab61f6a13b6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.561491 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016b751-1391-43a7-8bb7-ab61f6a13b6d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.593366 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.937045 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm2qb" Mar 19 18:12:43 crc kubenswrapper[4792]: I0319 18:12:43.979815 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:12:44 crc kubenswrapper[4792]: I0319 18:12:44.018912 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nm2qb"] Mar 19 18:12:44 crc kubenswrapper[4792]: I0319 18:12:44.112266 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:44 crc kubenswrapper[4792]: W0319 18:12:44.113014 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31735a8d_32d4_4d46_b9f6_224f47cb9213.slice/crio-245bdd48c0ad179872ef9d355708e39200533dc0fa089f44fd8a715eb5faa397 WatchSource:0}: Error finding container 245bdd48c0ad179872ef9d355708e39200533dc0fa089f44fd8a715eb5faa397: Status 404 returned error can't find the container with id 245bdd48c0ad179872ef9d355708e39200533dc0fa089f44fd8a715eb5faa397 Mar 19 18:12:44 crc kubenswrapper[4792]: I0319 18:12:44.955499 4792 generic.go:334] "Generic (PLEG): container finished" podID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerID="ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab" exitCode=0 Mar 19 18:12:44 crc kubenswrapper[4792]: I0319 18:12:44.955547 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerDied","Data":"ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab"} Mar 19 18:12:44 crc kubenswrapper[4792]: I0319 18:12:44.955575 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerStarted","Data":"245bdd48c0ad179872ef9d355708e39200533dc0fa089f44fd8a715eb5faa397"} Mar 19 18:12:45 crc kubenswrapper[4792]: I0319 18:12:45.757604 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" path="/var/lib/kubelet/pods/0016b751-1391-43a7-8bb7-ab61f6a13b6d/volumes" Mar 19 18:12:45 crc kubenswrapper[4792]: I0319 18:12:45.966967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerStarted","Data":"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3"} Mar 19 18:12:47 crc kubenswrapper[4792]: I0319 18:12:47.995645 4792 generic.go:334] "Generic (PLEG): container finished" podID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerID="22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3" exitCode=0 Mar 19 18:12:47 crc kubenswrapper[4792]: I0319 18:12:47.995674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerDied","Data":"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3"} Mar 19 18:12:49 crc kubenswrapper[4792]: I0319 18:12:49.009512 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerStarted","Data":"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c"} Mar 19 18:12:49 crc kubenswrapper[4792]: I0319 18:12:49.029189 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hj8lb" podStartSLOduration=2.425315407 podStartE2EDuration="6.029167797s" podCreationTimestamp="2026-03-19 
18:12:43 +0000 UTC" firstStartedPulling="2026-03-19 18:12:44.957205701 +0000 UTC m=+5528.103263241" lastFinishedPulling="2026-03-19 18:12:48.561058091 +0000 UTC m=+5531.707115631" observedRunningTime="2026-03-19 18:12:49.027608304 +0000 UTC m=+5532.173665864" watchObservedRunningTime="2026-03-19 18:12:49.029167797 +0000 UTC m=+5532.175225357" Mar 19 18:12:50 crc kubenswrapper[4792]: I0319 18:12:50.231271 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:12:50 crc kubenswrapper[4792]: I0319 18:12:50.231740 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:12:53 crc kubenswrapper[4792]: I0319 18:12:53.593660 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:53 crc kubenswrapper[4792]: I0319 18:12:53.594363 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:54 crc kubenswrapper[4792]: I0319 18:12:54.150201 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:54 crc kubenswrapper[4792]: I0319 18:12:54.233920 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:54 crc kubenswrapper[4792]: I0319 18:12:54.628324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.096353 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hj8lb" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="registry-server" containerID="cri-o://f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c" gracePeriod=2 Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.628025 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.714350 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities\") pod \"31735a8d-32d4-4d46-b9f6-224f47cb9213\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.714484 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content\") pod \"31735a8d-32d4-4d46-b9f6-224f47cb9213\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.714566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrpfz\" (UniqueName: \"kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz\") pod \"31735a8d-32d4-4d46-b9f6-224f47cb9213\" (UID: \"31735a8d-32d4-4d46-b9f6-224f47cb9213\") " Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.715564 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities" (OuterVolumeSpecName: "utilities") pod "31735a8d-32d4-4d46-b9f6-224f47cb9213" (UID: 
"31735a8d-32d4-4d46-b9f6-224f47cb9213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.726766 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz" (OuterVolumeSpecName: "kube-api-access-xrpfz") pod "31735a8d-32d4-4d46-b9f6-224f47cb9213" (UID: "31735a8d-32d4-4d46-b9f6-224f47cb9213"). InnerVolumeSpecName "kube-api-access-xrpfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.769681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31735a8d-32d4-4d46-b9f6-224f47cb9213" (UID: "31735a8d-32d4-4d46-b9f6-224f47cb9213"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.818618 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrpfz\" (UniqueName: \"kubernetes.io/projected/31735a8d-32d4-4d46-b9f6-224f47cb9213-kube-api-access-xrpfz\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.818668 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-utilities\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:56 crc kubenswrapper[4792]: I0319 18:12:56.818682 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31735a8d-32d4-4d46-b9f6-224f47cb9213-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.110788 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerID="f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c" exitCode=0 Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.110831 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerDied","Data":"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c"} Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.110882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj8lb" event={"ID":"31735a8d-32d4-4d46-b9f6-224f47cb9213","Type":"ContainerDied","Data":"245bdd48c0ad179872ef9d355708e39200533dc0fa089f44fd8a715eb5faa397"} Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.110834 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj8lb" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.110904 4792 scope.go:117] "RemoveContainer" containerID="f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.141335 4792 scope.go:117] "RemoveContainer" containerID="22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.149406 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.161259 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hj8lb"] Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.193761 4792 scope.go:117] "RemoveContainer" containerID="ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.230873 4792 scope.go:117] "RemoveContainer" 
containerID="f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c" Mar 19 18:12:57 crc kubenswrapper[4792]: E0319 18:12:57.231385 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c\": container with ID starting with f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c not found: ID does not exist" containerID="f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.231433 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c"} err="failed to get container status \"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c\": rpc error: code = NotFound desc = could not find container \"f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c\": container with ID starting with f28d0cc5a28731f3d7b19c71344b30dd2cf44b941950b3ccf5b653f93b607d3c not found: ID does not exist" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.231461 4792 scope.go:117] "RemoveContainer" containerID="22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3" Mar 19 18:12:57 crc kubenswrapper[4792]: E0319 18:12:57.231786 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3\": container with ID starting with 22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3 not found: ID does not exist" containerID="22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.232146 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3"} err="failed to get container status \"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3\": rpc error: code = NotFound desc = could not find container \"22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3\": container with ID starting with 22871866e689e751fa637bb829046855a7fb19bdb97d85a0e1c58dd44bafc8e3 not found: ID does not exist" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.232172 4792 scope.go:117] "RemoveContainer" containerID="ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab" Mar 19 18:12:57 crc kubenswrapper[4792]: E0319 18:12:57.232501 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab\": container with ID starting with ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab not found: ID does not exist" containerID="ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.232537 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab"} err="failed to get container status \"ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab\": rpc error: code = NotFound desc = could not find container \"ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab\": container with ID starting with ccec2cb1bc6e36513a4d0c035e0dda0dfab3a406285665b848e76ceb535359ab not found: ID does not exist" Mar 19 18:12:57 crc kubenswrapper[4792]: I0319 18:12:57.759454 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" path="/var/lib/kubelet/pods/31735a8d-32d4-4d46-b9f6-224f47cb9213/volumes" Mar 19 18:13:20 crc kubenswrapper[4792]: I0319 
18:13:20.230644 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:13:20 crc kubenswrapper[4792]: I0319 18:13:20.231198 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.230659 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.232079 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.232243 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szhln" Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.233457 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57d15e901fe2337fd8a05868cdd2220643fc733c1e74b8172874bbeed3be0673"} pod="openshift-machine-config-operator/machine-config-daemon-szhln" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.233536 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" containerID="cri-o://57d15e901fe2337fd8a05868cdd2220643fc733c1e74b8172874bbeed3be0673" gracePeriod=600 Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.919716 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerID="57d15e901fe2337fd8a05868cdd2220643fc733c1e74b8172874bbeed3be0673" exitCode=0 Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.919944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerDied","Data":"57d15e901fe2337fd8a05868cdd2220643fc733c1e74b8172874bbeed3be0673"} Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.920257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szhln" event={"ID":"a9e72e9a-50c3-41db-8657-7ae683c7c13a","Type":"ContainerStarted","Data":"b67f6de2517e1fb153136d2ba296d952fe020f954310e3e91d957b02e0ee98f5"} Mar 19 18:13:50 crc kubenswrapper[4792]: I0319 18:13:50.920293 4792 scope.go:117] "RemoveContainer" containerID="a006ede42ade388b7cbad0119a7579708b8ec5d7d5c0f09f679dca8bd550c0db" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.150980 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565734-5pk67"] Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152192 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="extract-utilities" Mar 19 18:14:00 crc 
kubenswrapper[4792]: I0319 18:14:00.152211 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="extract-utilities" Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152257 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152265 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152324 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="extract-utilities" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152334 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="extract-utilities" Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152359 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="extract-content" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152367 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="extract-content" Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152391 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152398 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: E0319 18:14:00.152413 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="extract-content" Mar 19 18:14:00 crc 
kubenswrapper[4792]: I0319 18:14:00.152420 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="extract-content" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152707 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0016b751-1391-43a7-8bb7-ab61f6a13b6d" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.152728 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="31735a8d-32d4-4d46-b9f6-224f47cb9213" containerName="registry-server" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.153810 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.156647 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.156949 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.157078 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.162855 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565734-5pk67"] Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.292272 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2zv\" (UniqueName: \"kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv\") pod \"auto-csr-approver-29565734-5pk67\" (UID: \"051f1bb2-7003-40a8-94f1-dd72ff02b509\") " pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.395625 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2zv\" (UniqueName: \"kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv\") pod \"auto-csr-approver-29565734-5pk67\" (UID: \"051f1bb2-7003-40a8-94f1-dd72ff02b509\") " pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:00 crc kubenswrapper[4792]: I0319 18:14:00.828193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2zv\" (UniqueName: \"kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv\") pod \"auto-csr-approver-29565734-5pk67\" (UID: \"051f1bb2-7003-40a8-94f1-dd72ff02b509\") " pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:01 crc kubenswrapper[4792]: I0319 18:14:01.078948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:01 crc kubenswrapper[4792]: I0319 18:14:01.569578 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565734-5pk67"] Mar 19 18:14:02 crc kubenswrapper[4792]: I0319 18:14:02.056176 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565734-5pk67" event={"ID":"051f1bb2-7003-40a8-94f1-dd72ff02b509","Type":"ContainerStarted","Data":"769280f7f580c783ad31fe879aeb3460c7bdb402bf98104fd4801ea6d13fc38b"} Mar 19 18:14:04 crc kubenswrapper[4792]: I0319 18:14:04.084621 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565734-5pk67" event={"ID":"051f1bb2-7003-40a8-94f1-dd72ff02b509","Type":"ContainerStarted","Data":"27236888ed94798d5572da02a51f2c6f5acb359520fed410806c60dada895efd"} Mar 19 18:14:04 crc kubenswrapper[4792]: I0319 18:14:04.105408 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565734-5pk67" podStartSLOduration=2.879707588 
podStartE2EDuration="4.105386791s" podCreationTimestamp="2026-03-19 18:14:00 +0000 UTC" firstStartedPulling="2026-03-19 18:14:01.576491905 +0000 UTC m=+5604.722549445" lastFinishedPulling="2026-03-19 18:14:02.802171108 +0000 UTC m=+5605.948228648" observedRunningTime="2026-03-19 18:14:04.101215077 +0000 UTC m=+5607.247272637" watchObservedRunningTime="2026-03-19 18:14:04.105386791 +0000 UTC m=+5607.251444331" Mar 19 18:14:06 crc kubenswrapper[4792]: I0319 18:14:06.110093 4792 generic.go:334] "Generic (PLEG): container finished" podID="051f1bb2-7003-40a8-94f1-dd72ff02b509" containerID="27236888ed94798d5572da02a51f2c6f5acb359520fed410806c60dada895efd" exitCode=0 Mar 19 18:14:06 crc kubenswrapper[4792]: I0319 18:14:06.110193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565734-5pk67" event={"ID":"051f1bb2-7003-40a8-94f1-dd72ff02b509","Type":"ContainerDied","Data":"27236888ed94798d5572da02a51f2c6f5acb359520fed410806c60dada895efd"} Mar 19 18:14:07 crc kubenswrapper[4792]: I0319 18:14:07.593979 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:07 crc kubenswrapper[4792]: I0319 18:14:07.707384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj2zv\" (UniqueName: \"kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv\") pod \"051f1bb2-7003-40a8-94f1-dd72ff02b509\" (UID: \"051f1bb2-7003-40a8-94f1-dd72ff02b509\") " Mar 19 18:14:07 crc kubenswrapper[4792]: I0319 18:14:07.714123 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv" (OuterVolumeSpecName: "kube-api-access-zj2zv") pod "051f1bb2-7003-40a8-94f1-dd72ff02b509" (UID: "051f1bb2-7003-40a8-94f1-dd72ff02b509"). InnerVolumeSpecName "kube-api-access-zj2zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 18:14:07 crc kubenswrapper[4792]: I0319 18:14:07.810537 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj2zv\" (UniqueName: \"kubernetes.io/projected/051f1bb2-7003-40a8-94f1-dd72ff02b509-kube-api-access-zj2zv\") on node \"crc\" DevicePath \"\"" Mar 19 18:14:08 crc kubenswrapper[4792]: I0319 18:14:08.145591 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565734-5pk67" event={"ID":"051f1bb2-7003-40a8-94f1-dd72ff02b509","Type":"ContainerDied","Data":"769280f7f580c783ad31fe879aeb3460c7bdb402bf98104fd4801ea6d13fc38b"} Mar 19 18:14:08 crc kubenswrapper[4792]: I0319 18:14:08.145990 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769280f7f580c783ad31fe879aeb3460c7bdb402bf98104fd4801ea6d13fc38b" Mar 19 18:14:08 crc kubenswrapper[4792]: I0319 18:14:08.145715 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565734-5pk67" Mar 19 18:14:08 crc kubenswrapper[4792]: I0319 18:14:08.208446 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565728-srsll"] Mar 19 18:14:08 crc kubenswrapper[4792]: I0319 18:14:08.224988 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565728-srsll"] Mar 19 18:14:09 crc kubenswrapper[4792]: I0319 18:14:09.756478 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb06173e-b912-462b-813a-9822e6fbd709" path="/var/lib/kubelet/pods/fb06173e-b912-462b-813a-9822e6fbd709/volumes" Mar 19 18:14:12 crc kubenswrapper[4792]: I0319 18:14:12.385662 4792 scope.go:117] "RemoveContainer" containerID="ac1004dd47948892e2346dc119528fc562aa76d7c31b6a833d9cdf2007bfefd3" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.154335 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"] Mar 19 18:15:00 crc kubenswrapper[4792]: E0319 18:15:00.155396 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051f1bb2-7003-40a8-94f1-dd72ff02b509" containerName="oc" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.155411 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="051f1bb2-7003-40a8-94f1-dd72ff02b509" containerName="oc" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.155675 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="051f1bb2-7003-40a8-94f1-dd72ff02b509" containerName="oc" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.156504 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.160306 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.160490 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.183696 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"] Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.289399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.289488 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlph\" (UniqueName: \"kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.289720 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.391784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlph\" (UniqueName: \"kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.391896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.392055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.392992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.400527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.416350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlph\" (UniqueName: \"kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph\") pod \"collect-profiles-29565735-v2bh6\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:00 crc kubenswrapper[4792]: I0319 18:15:00.479040 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:01 crc kubenswrapper[4792]: I0319 18:15:01.154234 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"]
Mar 19 18:15:01 crc kubenswrapper[4792]: I0319 18:15:01.780044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" event={"ID":"3c31e48f-38e0-4475-8740-1fa9a843a241","Type":"ContainerStarted","Data":"fc0ccc7ad3fb9cbfc5a544b2096d1c498be96a852c3f0d38558929856b0e496e"}
Mar 19 18:15:01 crc kubenswrapper[4792]: I0319 18:15:01.780751 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" event={"ID":"3c31e48f-38e0-4475-8740-1fa9a843a241","Type":"ContainerStarted","Data":"4ba70e07961d6f0115dbcae31dc2cd90968d711e171ecd08e3a03464d1e2eb62"}
Mar 19 18:15:01 crc kubenswrapper[4792]: I0319 18:15:01.818662 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" podStartSLOduration=1.818643697 podStartE2EDuration="1.818643697s" podCreationTimestamp="2026-03-19 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 18:15:01.807383869 +0000 UTC m=+5664.953441409" watchObservedRunningTime="2026-03-19 18:15:01.818643697 +0000 UTC m=+5664.964701237"
Mar 19 18:15:02 crc kubenswrapper[4792]: I0319 18:15:02.792319 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c31e48f-38e0-4475-8740-1fa9a843a241" containerID="fc0ccc7ad3fb9cbfc5a544b2096d1c498be96a852c3f0d38558929856b0e496e" exitCode=0
Mar 19 18:15:02 crc kubenswrapper[4792]: I0319 18:15:02.792706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" event={"ID":"3c31e48f-38e0-4475-8740-1fa9a843a241","Type":"ContainerDied","Data":"fc0ccc7ad3fb9cbfc5a544b2096d1c498be96a852c3f0d38558929856b0e496e"}
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.218471 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.307607 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume\") pod \"3c31e48f-38e0-4475-8740-1fa9a843a241\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") "
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.307859 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume\") pod \"3c31e48f-38e0-4475-8740-1fa9a843a241\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") "
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.307894 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnlph\" (UniqueName: \"kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph\") pod \"3c31e48f-38e0-4475-8740-1fa9a843a241\" (UID: \"3c31e48f-38e0-4475-8740-1fa9a843a241\") "
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.308899 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c31e48f-38e0-4475-8740-1fa9a843a241" (UID: "3c31e48f-38e0-4475-8740-1fa9a843a241"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.314716 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph" (OuterVolumeSpecName: "kube-api-access-fnlph") pod "3c31e48f-38e0-4475-8740-1fa9a843a241" (UID: "3c31e48f-38e0-4475-8740-1fa9a843a241"). InnerVolumeSpecName "kube-api-access-fnlph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.316424 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c31e48f-38e0-4475-8740-1fa9a843a241" (UID: "3c31e48f-38e0-4475-8740-1fa9a843a241"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.410423 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c31e48f-38e0-4475-8740-1fa9a843a241-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.410465 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c31e48f-38e0-4475-8740-1fa9a843a241-config-volume\") on node \"crc\" DevicePath \"\""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.410476 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnlph\" (UniqueName: \"kubernetes.io/projected/3c31e48f-38e0-4475-8740-1fa9a843a241-kube-api-access-fnlph\") on node \"crc\" DevicePath \"\""
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.822668 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6"
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.822561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29565735-v2bh6" event={"ID":"3c31e48f-38e0-4475-8740-1fa9a843a241","Type":"ContainerDied","Data":"4ba70e07961d6f0115dbcae31dc2cd90968d711e171ecd08e3a03464d1e2eb62"}
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.826268 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba70e07961d6f0115dbcae31dc2cd90968d711e171ecd08e3a03464d1e2eb62"
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.880190 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm"]
Mar 19 18:15:04 crc kubenswrapper[4792]: I0319 18:15:04.889666 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29565690-ntstm"]
Mar 19 18:15:05 crc kubenswrapper[4792]: I0319 18:15:05.765415 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4368e7ae-b63b-42a2-87e0-455db4edac41" path="/var/lib/kubelet/pods/4368e7ae-b63b-42a2-87e0-455db4edac41/volumes"
Mar 19 18:15:12 crc kubenswrapper[4792]: I0319 18:15:12.524868 4792 scope.go:117] "RemoveContainer" containerID="881bd21c05d24c899b730735d9169ef0acccf40e2b144d985e916b824c71ee89"
Mar 19 18:15:50 crc kubenswrapper[4792]: I0319 18:15:50.230554 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 18:15:50 crc kubenswrapper[4792]: I0319 18:15:50.231250 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.150791 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29565736-kc62n"]
Mar 19 18:16:00 crc kubenswrapper[4792]: E0319 18:16:00.152085 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c31e48f-38e0-4475-8740-1fa9a843a241" containerName="collect-profiles"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.152103 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c31e48f-38e0-4475-8740-1fa9a843a241" containerName="collect-profiles"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.152420 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c31e48f-38e0-4475-8740-1fa9a843a241" containerName="collect-profiles"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.153548 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.156889 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-djkrm"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.156938 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.157115 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.162765 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565736-kc62n"]
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.256078 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxcv\" (UniqueName: \"kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv\") pod \"auto-csr-approver-29565736-kc62n\" (UID: \"7ea7cc16-4d95-4796-8595-1bc0857f114e\") " pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.359345 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxcv\" (UniqueName: \"kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv\") pod \"auto-csr-approver-29565736-kc62n\" (UID: \"7ea7cc16-4d95-4796-8595-1bc0857f114e\") " pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.382873 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxcv\" (UniqueName: \"kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv\") pod \"auto-csr-approver-29565736-kc62n\" (UID: \"7ea7cc16-4d95-4796-8595-1bc0857f114e\") " pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:00 crc kubenswrapper[4792]: I0319 18:16:00.477733 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:01 crc kubenswrapper[4792]: I0319 18:16:01.106193 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29565736-kc62n"]
Mar 19 18:16:01 crc kubenswrapper[4792]: W0319 18:16:01.109363 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea7cc16_4d95_4796_8595_1bc0857f114e.slice/crio-9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2 WatchSource:0}: Error finding container 9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2: Status 404 returned error can't find the container with id 9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2
Mar 19 18:16:01 crc kubenswrapper[4792]: I0319 18:16:01.114875 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 19 18:16:01 crc kubenswrapper[4792]: I0319 18:16:01.484792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565736-kc62n" event={"ID":"7ea7cc16-4d95-4796-8595-1bc0857f114e","Type":"ContainerStarted","Data":"9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2"}
Mar 19 18:16:02 crc kubenswrapper[4792]: I0319 18:16:02.499208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565736-kc62n" event={"ID":"7ea7cc16-4d95-4796-8595-1bc0857f114e","Type":"ContainerStarted","Data":"2d69e444410fe0a38562b5af80fb0775bf6c57317e4c524a167452010a5bffd8"}
Mar 19 18:16:02 crc kubenswrapper[4792]: I0319 18:16:02.526561 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29565736-kc62n" podStartSLOduration=1.618696293 podStartE2EDuration="2.526541808s" podCreationTimestamp="2026-03-19 18:16:00 +0000 UTC" firstStartedPulling="2026-03-19 18:16:01.113821712 +0000 UTC m=+5724.259879302" lastFinishedPulling="2026-03-19 18:16:02.021667267 +0000 UTC m=+5725.167724817" observedRunningTime="2026-03-19 18:16:02.516952136 +0000 UTC m=+5725.663009676" watchObservedRunningTime="2026-03-19 18:16:02.526541808 +0000 UTC m=+5725.672599348"
Mar 19 18:16:03 crc kubenswrapper[4792]: I0319 18:16:03.511247 4792 generic.go:334] "Generic (PLEG): container finished" podID="7ea7cc16-4d95-4796-8595-1bc0857f114e" containerID="2d69e444410fe0a38562b5af80fb0775bf6c57317e4c524a167452010a5bffd8" exitCode=0
Mar 19 18:16:03 crc kubenswrapper[4792]: I0319 18:16:03.511305 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565736-kc62n" event={"ID":"7ea7cc16-4d95-4796-8595-1bc0857f114e","Type":"ContainerDied","Data":"2d69e444410fe0a38562b5af80fb0775bf6c57317e4c524a167452010a5bffd8"}
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.537020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29565736-kc62n" event={"ID":"7ea7cc16-4d95-4796-8595-1bc0857f114e","Type":"ContainerDied","Data":"9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2"}
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.537465 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9adc731549f48a60bcab545b688846ebf333937c73e7504116c14ad32130d5d2"
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.698890 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.810257 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxcv\" (UniqueName: \"kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv\") pod \"7ea7cc16-4d95-4796-8595-1bc0857f114e\" (UID: \"7ea7cc16-4d95-4796-8595-1bc0857f114e\") "
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.820393 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv" (OuterVolumeSpecName: "kube-api-access-fbxcv") pod "7ea7cc16-4d95-4796-8595-1bc0857f114e" (UID: "7ea7cc16-4d95-4796-8595-1bc0857f114e"). InnerVolumeSpecName "kube-api-access-fbxcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 18:16:05 crc kubenswrapper[4792]: I0319 18:16:05.913703 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxcv\" (UniqueName: \"kubernetes.io/projected/7ea7cc16-4d95-4796-8595-1bc0857f114e-kube-api-access-fbxcv\") on node \"crc\" DevicePath \"\""
Mar 19 18:16:06 crc kubenswrapper[4792]: I0319 18:16:06.547652 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29565736-kc62n"
Mar 19 18:16:06 crc kubenswrapper[4792]: I0319 18:16:06.778285 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29565730-mnd97"]
Mar 19 18:16:06 crc kubenswrapper[4792]: I0319 18:16:06.789928 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29565730-mnd97"]
Mar 19 18:16:07 crc kubenswrapper[4792]: I0319 18:16:07.754882 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db900e0-0a74-41ca-b587-5d2623a492e3" path="/var/lib/kubelet/pods/4db900e0-0a74-41ca-b587-5d2623a492e3/volumes"
Mar 19 18:16:12 crc kubenswrapper[4792]: I0319 18:16:12.640863 4792 scope.go:117] "RemoveContainer" containerID="de2f68e393ea1e9cfce0e834685ad7ad38850d979530524cae69e22177f779cc"
Mar 19 18:16:20 crc kubenswrapper[4792]: I0319 18:16:20.230793 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szhln container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 19 18:16:20 crc kubenswrapper[4792]: I0319 18:16:20.231286 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szhln" podUID="a9e72e9a-50c3-41db-8657-7ae683c7c13a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"